Smartphone-powered 3D printer: http://www.olo3d.net/ Autodesk Project Escher. See draft paper. See MMathPhys oral presentation. Coding theorem connects probability and complexity. AIT differs fundamentally from Shannon information theory because the latter is fundamentally a theory about distributions, whereas the former is a theory about the information content of individual objects (see Descriptional complexity). If one assumes that the probability of generating a binary input string of length l is simply 2^-l (which is true for prefix codes, see Appendix A), then the most likely way to obtain an output by random sampling of inputs is with the shortest program that generates it, a string of length K(x). Direct application of these results from AIT to many practical systems in science or engineering suffers from a number of well-known problems: The way these problems are tackled is described in the next sections. Coding theorem for computable functions. We begin with a weaker form of the coding theorem, applicable to real-world (computable) functions, Eq. (2), where the conditional complexity is that of the output x, given the map f and the value n (see M. Li and P.M.B. Vitanyi, An Introduction to Kolmogorov Complexity and Its Applications, Springer-Verlag New York Inc., 2008, and Lecture notes on descriptional complexity and randomness). We provide a derivation of equation (2) in Appendix B, using standard results from AIT such as: the complexity of a whole set is often much less than the complexity of individual members of the set (9). Informally, this conditional complexity can be viewed as the length of computer code required to specify the output, given that the function and value are already pre-programmed into the computer. Note that equation (2) is just an upper bound. In contrast to the full coding theorem of equation (1), there is no lower bound. 
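Since K(x) is uncomputable, practical work approximates it with a compressor; a minimal sketch (zlib compressed length as an assumed proxy K̃, my own helper, not the paper's estimator):

```python
import random
import zlib

def K_tilde(x: bytes) -> int:
    """Crude upper bound on K(x): length of the zlib-compressed string, in bits."""
    return 8 * len(zlib.compress(x, 9))

# A patterned string should get a far shorter description than a
# pseudo-random one of the same length (simplicity bias in miniature).
random.seed(0)
simple = b"01" * 500                                        # 1000 bytes, very regular
messy = bytes(random.getrandbits(8) for _ in range(1000))   # 1000 pseudo-random bytes
```

Any real compressor only ever gives an upper bound on K(x), which matches the one-sided (upper-bound) character of the computable coding theorem discussed above.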
We mainly consider maps that comply with a few simple restrictions: If instead the map is allowed to contain arbitrary amounts of information, then the map could assign arbitrary probabilities to the outputs, and any coding-theorem-like behaviour would be lost. We discuss this fixed-complexity condition further in Appendix C. As we show in Appendix E, it turns out that a reasonably broad range of complexities will follow under quite general conditions for fixed-complexity maps. They use the above conditions to argue that . Approximately computing K(x). Although K(x) is uncomputable, it has been approximated using standard compression algorithms. K̃(x) is used to denote some real-world (computable) approximation to K(x). Importance of terms. Experimental results applying the coding theorem to short strings suggest that the terms are not very important, where the constants depend on the mapping, but not on the output. We call this upper bound on the probability simplicity bias: high-probability outputs must be 'simple', and complex outputs must have exponentially lower probabilities. In contrast to the full coding theorem, the lack of a lower bound means that simple outputs may also have low probabilities. They offer estimates for the constants in Appendix D. In Appendix B, they also argue that the upper bound of equation (3) should be tight for most inputs, but weak for many outputs. See them! Discrete RNA sequence-to-structure mapping. Coarse-grained ordinary differential equation. Coarse-grained stochastic partial differential equation. Black-Scholes equation. Polynomial curves. Random matrix – bias but not simplicity bias. Random walk map. Logistic map (see Nonlinear maps). Predicting which of two outputs has higher probability. Connection to Chomsky hierarchy (see Formal language), Sloppy systems. Appendix A: AIT. Appendix B: Upper and lower bound for computable maps. Upper bound: following derivation using Shannon-Fano code as in InfoTheory book. Lower bound: Not sure. Ask!, or read! Page 12. 
Also many outputs must have probability below their upper bounds. Appendix C: Fixed-complexity map. Appendix D: Making predictions for P(x) in computable maps. Appendix E: Estimating the range of complexities. Arguments based on bounding complexity given the description: map + index of output. This gives upper bounds on the min and max complexities (everything up to an additive constant). For the max, we also need a lower bound, and this is given by the well-known fact in AIT that if one has 2^k different strings, not all of them can have complexity lower than k, as there are not enough short descriptions. In fact, most of the strings need to have a complexity close to k. Appendix F: Approximations to K(x). Appendix G: Simplicity bias and system size. Appendix H: On the intuitive connection of probability and complexity. Appendix I: Simplicity bias in the log-binomial distribution. Appendix J: Predicting the number of outputs, by fitting and estimating from the known details of the system. Appendix K: Further examples and figures. Continuous systems are sampled and discretized to create the output. L-systems, Circadian rhythm, Cell cycle, Feed-forward network. Sample networks; measure the complexity of a given network by the entropy of the distribution of outputs. Logic gate. Appendix L: Histograms of complexity. Abiogenesis, biopoiesis or OoL (Origins of Life) is the natural process of life arising from non-living matter, such as simple organic compounds. Self-organization is expected to play a major role, both in the origin of life and in its subsequent Evolution. Work of M. Eigen. See book by Prigogine, etc. See references at the end of page 54 in here. On Nature's Strategy for Assigning Genetic Code Multiplicity. Origin and evolution of the genetic code: the universal enigma. Self-Organisation and Evolution of Biological and Social Systems. 'RNA world' inches closer to explaining origins of life. Kauffman talk. 
Ideas: As evolution progresses it creates new opportunities and richer context for evolution to evolve further. Function can be defined as that subset of causal effects that contribute to causing a particular goal. In biology that goal is survival. The appropriate language in evolution goes beyond cause and effect, and includes enabling. Organisms are Kantian wholes, where the parts exist for and by means of the whole. The rest of the ideas seem to basically say that biology and evolution are too complex to (fully) describe with mathematical laws. Maybe we can understand the adjacent possible, look at convergent evolution... An Achlioptas process is a type of Explosive percolation, also known as a k-edge process, that involves choosing candidate edges uniformly at random between any pair of nodes (compare with other Spanning cluster-avoiding processes) and applying a rule to select which one is actually chosen. These have been proven to be continuous in the thermodynamic limit, for fixed k. They are generalizations of Erdos-Renyi Random graphs. The first proposed type was introduced in Explosive Percolation in Random Networks and was thought to maybe show a discontinuous phase transition. Achlioptas process phase transitions are continuous. It has now been shown that the Percolation phase transition for Achlioptas processes (and in fact a more general class of k-vertex rule percolation processes) is continuous (in the thermodynamic limit), but very steep (see Explosive Percolation Transition is Actually Continuous and Achlioptas process phase transitions are continuous). One can prove the continuity by looking at the asymptotic effect of removing a single link, as the total size goes to infinity. However, Oliver Riordan and Lutz Warnke proved it by proving, in essence, that the number of subcritical components that join together to form the emergent macroscopic-sized component is not sub-extensive in system size. 
In the words of Friedman and Landsberg, Achlioptas processes do not lead to the build-up of a "powder keg" (a type of cluster configuration that does lead to discontinuous transitions). However, the model can be generalized to one that shows genuinely discontinuous transitions (see Anomalous critical and supercritical phenomena in explosive percolation). One way to achieve discontinuity is to allow the number of edges in the rule to scale up with N, the network size, in a certain way. The 2-edge Achlioptas process is the simplest type: Start with N isolated nodes and add undirected, unweighted edges one at a time. This is done by choosing, at each step, two possible edges uniformly (and independently) at random from the set of possible {edges between a pair of distinct nodes}. One adds only one of these edges, making a choice based on a systematic rule that affects the speed of development of a giant connected component (GCC). k-edge rules are defined similarly. Product rule. One choice that yields "explosive" percolation is to use the so-called "product rule", in which one always retains the edge that minimizes the product of the sizes of the two components that it merges (with an arbitrary choice when there is a tie). Sum rule. The size of the new component
formed is minimized. Bohman–Frieze (BF) rule: edge 1 is chosen if it joins two isolated vertices, and edge 2 otherwise. A selection rule can be classified as a bounded-size or an unbounded-size rule. In a bounded-size selection rule, decisions depend only on the sizes of the components and, moreover, all sizes greater than some (rule-specific) constant are treated identically. There are also the more general k-edge rules based on choosing k edges at each step, and selecting one (or potentially more). See more rules here: Explosive percolation: Unusual transitions of a simple model. The Evolution of Random Graphs (product rule first suggested here). Avoiding a giant component. Bounded-size rules are able to shift the percolation threshold. Birth control for giants. The percolation transition is strongly conjectured to be continuous for all bounded-size rules. A colloidal particle that is an active system. Common types are catalytic colloids, which catalyze some reaction, often due to their surface chemical properties or those of a coating. This is often done so that the particle is self-propelling. See also: Individual and collective behavior of artificial swimmers: "Janus particles". Active matter refers to a type of bulk matter, often soft condensed matter, that is an Active system, i.e. it produces its own driving energy (for example, self-propelling particles and micro-swimmers). Driven matter is a closely related type of matter, where the system is externally driven. 
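The product rule described above is easy to simulate; a sketch (my own union-find implementation, with self-loops allowed as harmless no-op merges), comparing it against the no-choice Erdős–Rényi growth process:

```python
import random

class DSU:
    """Union-find with path compression and union by size."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
    def find(self, a):
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]
            a = self.parent[a]
        return a
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def largest_component(n, m, product_rule, rng):
    """Add m edges to n nodes; with product_rule, keep the candidate edge that
    minimizes the product of the sizes of the components it would merge."""
    dsu = DSU(n)
    for _ in range(m):
        e1 = (rng.randrange(n), rng.randrange(n))
        e2 = (rng.randrange(n), rng.randrange(n))
        if product_rule:
            p1 = dsu.size[dsu.find(e1[0])] * dsu.size[dsu.find(e1[1])]
            p2 = dsu.size[dsu.find(e2[0])] * dsu.size[dsu.find(e2[1])]
            edge = e1 if p1 <= p2 else e2
        else:
            edge = e1              # no choice: plain Erdos-Renyi growth
        dsu.union(*edge)
    return max(dsu.size[dsu.find(v)] for v in range(n))

n = 2000
er = largest_component(n, int(0.7 * n), False, random.Random(1))
pr = largest_component(n, int(0.7 * n), True, random.Random(1))
```

With the edge budget set between the Erdős–Rényi threshold (t = 0.5) and the product-rule threshold (t ≈ 0.888), the ER run already has a giant component while the product-rule run does not, which is the delayed, steep transition described above.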
The Hydrodynamics of Active Systems. Life at low Reynolds number, see Low Reynolds number. See also Complex fluid dynamics, Colloid physics. Swimming at low Reynolds number: Stokes equation. Important consequence for swimmers: Scallop theorem (see Kinematic reversibility in fluid dynamics). Swimmer models. Used to calculate hydrodynamic interactions, for instance. The point-force problem for the Stokes equation can be solved using its Green function, often called the Oseen tensor (see here), where the prefactor is often omitted in the definition of the Oseen tensor. Using the Green function to construct the general solution, one can construct a multipole expansion. As swimmers (on average, in steady state) don't accelerate, the fluid isn't exerting a net force on them, so they can't be exerting a net force on the fluid (Newton's third law). Therefore the monopole term (called the Stokeslet) isn't present. An exception to this is the relatively large microorganism Volvox, for which the force of gravity is significant, giving a net force to the problem and creating a Stokeslet flow. Therefore, the dominant term is generally the dipolar term, where u is the velocity field. It is conventional to decompose the dipole so that the symmetric part is called the stresslet and the antisymmetric part the rotlet; the extra term added in this decomposition doesn't change the velocity field, by incompressibility. The rotlet is zero if the net torque on the fluid is zero, which it is for active microswimmers. Note that the dipolar flow has nematic symmetry; this is important in the collective behavior of active swimmers. 
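A numerical sketch of these singularity solutions, assuming the standard Oseen-tensor form u_i = (1/8πμ)(F_i/|r| + r_i (r·F)/|r|³) (function names are mine; the prefactor convention is as noted above):

```python
import math

def stokeslet(r, F, mu=1.0):
    """Flow at r from a point force F: u_i = (1/8*pi*mu)(F_i/|r| + r_i (r.F)/|r|^3)."""
    d = math.sqrt(sum(c * c for c in r))
    rdotF = sum(rc * Fc for rc, Fc in zip(r, F))
    return [(Fc / d + rc * rdotF / d**3) / (8 * math.pi * mu) for rc, Fc in zip(r, F)]

def norm(v):
    return math.sqrt(sum(c * c for c in v))

F = (1.0, 0.0, 0.0)
u1 = stokeslet((0.0, 1.0, 0.0), F)   # field one unit to the side of the force
u2 = stokeslet((0.0, 2.0, 0.0), F)   # twice as far: should be half as strong

eps = 1e-3  # small separation between the two opposed point forces
def dipole(r):
    """Force dipole: two opposite Stokeslets separated by eps along F."""
    minus = stokeslet((r[0] - eps / 2, r[1], r[2]), F)
    plus = stokeslet((r[0] + eps / 2, r[1], r[2]), F)
    return [a - b for a, b in zip(minus, plus)]
```

The Stokeslet field decays as 1/r, while the force dipole built from two opposed Stokeslets decays as 1/r², which is why the dipolar term dominates for force-free swimmers.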
We can have two kinds of dipolar flow around a swimmer: Physics of Microswimmers – Single Particle Motion and Collective Behavior. In pursuit of propulsion at the nanoscale. Biphasic, Lyotropic, Active Nematics. Cytoplasmic streaming. A physical perspective on cytoplasmic streaming. Cytoplasmic streaming in plant cells emerges naturally by microfilament self-organization. Spontaneous Circulation of Confined Active Suspensions. Instabilities, pattern formation, and mixing in active suspensions. A system with constituents that are able to produce their own energy. For instance, they are often self-propelling. See also the wiki article. Due to the energy consumption, these systems are intrinsically out of thermal equilibrium. If the system is made of bulk matter, it's called Active matter. Examples of active systems are schools of fish, flocks of birds, bacteria, artificial self-propelled particles, and self-organising bio-polymers such as microtubules and actin, both of which are part of the cytoskeleton of living cells. Dry active systems. A prominent example of active systems is Active colloids. Biophysics (biological systems are active systems). Flocks, herds, and schools: A quantitative theory of flocking. See Complex systems. DNA nanomachines in DNA nanotechnology. See Dynamical Instability in Boolean Networks as a Percolation Problem, Boolean network. New paper: Network Structure and Activity in Boolean Networks. Activities and Sensitivities in Boolean Network Models. Boolean functions in which few variables have high importance and most other variables have low importance play a role in eliciting order from Boolean networks. We should mention in passing that much of the discussion in this Letter can be formulated in terms of spectral methods or harmonic analysis on the cube. 
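The activities and sensitivities from the papers above can be estimated numerically; a sketch under the standard definitions (activity of variable i = probability that flipping bit i flips f; average sensitivity = sum of activities); all function names here are mine:

```python
import itertools
import random

def activities(f, n):
    """Activity of each variable: fraction of inputs x where flipping that bit changes f(x)."""
    acts = []
    for i in range(n):
        flips = sum(
            f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:])
            for x in itertools.product((0, 1), repeat=n)
        )
        acts.append(flips / 2**n)
    return acts

def random_biased_function(n, p, rng):
    """Random truth table: each output bit is 1 with probability p, 0 otherwise."""
    table = {x: 1 if rng.random() < p else 0
             for x in itertools.product((0, 1), repeat=n)}
    return table.__getitem__

rng = random.Random(0)
n, p = 10, 0.1
# Two Hamming neighbours disagree with probability 2p(1-p), so the expected
# average sensitivity of a random biased function is n * 2p(1-p) = 1.8 here.
s = sum(sum(activities(random_biased_function(n, p, rng), n))
        for _ in range(20)) / 20
```

The estimate s should sit near 1.8, and pushing the bias p toward 0 or 1 drives the sensitivity down, matching the point that highly biased functions give ordered dynamics.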
Boolean function derivative. Activity. Sensitivity. For a random Boolean function with bias p (so that each bit in the truth table is 1 with probability p and 0 otherwise), the probability that two Hamming neighbors are different is equal to 2p(1-p), since one can be 1 (with probability p) and the other 0 (with probability 1-p), and vice versa. From this one obtains the expected activities and sensitivity, where the expectation is taken w.r.t. the probability distribution of the truth tables. We can then conclude that highly biased functions (p far away from 0.5) are expected to have low average sensitivity. For a Boolean function f, a canalizing variable is a variable that determines (canalizes) the value of f if it has a given value. See the article for a more precise definition. For a random Boolean function with a single canalizing variable, it is shown here that the expected activity of the canalizing variable is larger than that of the rest of the variables. The average sensitivity (when averaged over all the functions in the network) appears to be a good parameter for predicting whether the dynamics of the Boolean network are ordered or chaotic. This can be observed by looking at Derrida curves. Acyclic networks can always be drawn with the vertices arranged so that all edges point downward, as in Fig 1. Also, all graphs that can be arranged like this are acyclic. From the proof of this fact one can deduce an algorithm for finding whether a network is acyclic or not: Furthermore, the adjacency matrix of such a graph can always be made upper-triangular with zeros on the diagonal (as there are no self-loops). The eigenvalues of an acyclic graph are thus all zero. One can also show the converse, thus: Biogerontology Research Foundation. Why Do We Age? The Molecular Mechanisms of Ageing. The 9 hallmarks of ageing. Great Desire for Extended Life and Health amongst the American Public. Aging: where physics meets biology (see comments). Further commentary here (see my comment there too). 
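The acyclicity test promised above can be sketched with Kahn's algorithm (repeatedly peel off in-degree-zero vertices; the ordering found is exactly the "all edges point downward" arrangement):

```python
def topological_order(adj):
    """Kahn's algorithm: returns a vertex order with all edges pointing 'downward',
    or None if the graph contains a cycle."""
    n = len(adj)
    indeg = [0] * n
    for u in range(n):
        for v in adj[u]:
            indeg[v] += 1
    stack = [v for v in range(n) if indeg[v] == 0]
    order = []
    while stack:
        u = stack.pop()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    return order if len(order) == n else None  # None => a cycle exists

dag = [[1, 2], [3], [3], []]   # edges 0->1, 0->2, 1->3, 2->3: acyclic
cyc = [[1], [2], [0]]          # edges 0->1->2->0: a cycle
```

Listing the vertices in this order makes the adjacency matrix strictly upper-triangular, so it is nilpotent and all its eigenvalues are zero, as stated in the note.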
See here and The thermodynamics of life. Agriculture is the cultivation of animals, plants, fungi, and other life forms for food, fiber, biofuel, medicinal and other products used to sustain and enhance human life (https://en.wikipedia.org/wiki/Agriculture). Agronomy: agriculture of plants. An algebra is a family of subsets of a set s.t.: If the algebra is closed under countable unions (not just finite ones), then it is a Sigma-algebra. In mathematics, and more specifically in abstract algebra, an algebraic structure is a set (called carrier set or underlying set) with one or more finitary operations defined on it that satisfies a list of axioms. https://en.wikipedia.org/wiki/Algebraic_structure Group-like algebraic structures. Ring-like algebraic structures. Lattice-like algebraic structures. Written in Python. See Descriptional complexity and MMathPhys oral presentation. See also Information theory, Theory of computation, Complexity theory, and Computational complexity. Good lecture notes for AIT: http://www.cse.iitk.ac.in/users/satyadev/a10/a10.html See Elements of Information Theory by Cover and Thomas (chap. 14). Conditional Kolmogorov complexity, defined via the pairing function (see Computability theory). The conditional Kolmogorov complexity is often defined as in Def. 2.0.1, but conditioned on the length of the string. Universality of Kolmogorov complexity. For sufficiently long x, the length of this
simulation program can be neglected, and we can discuss Kolmogorov
complexity without talking about the constants. Note that in the book on info theory they use the ceiling function for the {number of bits in a binary representation of a number}; however, as mentioned here, that fails for powers of 2, so we need to use floor(log2 n) + 1. Upper bound on Kolmogorov complexity. Lower bounds on Kolmogorov complexity. Kraft inequality. Relation to entropy. See proof in the book (uses Kraft's inequality, Jensen's inequality, and the concavity of the entropy). Therefore the average Kolmogorov complexity of the string approaches the entropy of the random variable from which the letters of the string are sampled. The compressibility achieved by the computer goes to the entropy limit. Theorem 14.4.3: There are an infinite number of integers n such that K(n) > log n. Theorem 14.5.1: Let the sequence be drawn according to a Bernoulli process; then its complexity per symbol converges to the entropy. For example, the fraction of sequences of length n that have complexity less than n − 5 is less than 1/32. This motivates the following definition. Definitions of algorithmic randomness, incompressibility. Strong law of large numbers for incompressible sequences. In general, we
can show that if a sequence is incompressible, it will satisfy all computable
statistical tests for randomness. (Otherwise, identification of the test that x
fails will reduce the descriptive complexity of x, yielding a contradiction.)
In this sense, the algorithmic test for randomness is the ultimate test,
including within it all other computable tests for randomness. We now remove the expectation from Theorem 14.3.1. Universality of the universal probability. Remark: Bounded likelihood ratio. The likelihood ratio is bounded, and doesn't go to 0 or ∞ for any x; thus no universal probability can be totally discarded relative to any other in hypothesis testing. This is essentially because any universal computer can simulate any other, and in that sense the probability distribution obtained by feeding random input into one is also contained in the distribution obtained in the other. In that sense we cannot reject the possibility that the universe
is the output of monkeys typing at a computer. However, we can reject
the hypothesis that the universe is random (monkeys with no computer). 😮 The example indicates that a random input to a computer is much more
likely to produce “interesting” outputs than a random input to a typewriter.
We all know that a computer is an intelligence amplifier. Apparently, it
creates sense from nonsense as well. Epimenides' liar paradox. Gödel's incompleteness theorem. Halting problem. Related: Berry's paradox and Bechenbach's paradox. Definition. Properties: 1. Ω is noncomputable. 2. Ω is a "philosopher's stone", or an oracle: knowledge of Ω to n bits can be used to prove any theorem for which {a proof expressible in less than n bits exists}. 3. Ω is algorithmically random. Theorem 14.8.1: Ω cannot be compressed by more than a constant; that is, there exists a constant c such that for all n its first n bits have complexity at least n − c. The universal gambling scheme on a
random sequence does asymptotically as well as a scheme that uses prior
knowledge of the true distribution! ...... Proof involves an extension of the {tree construction used for Shannon-Fano-Elias codes for computable probability distributions} to the uncomputable universal probability distributions. As stated in the proof in the InfoTheory book, "However, there is no effective procedure to find the lowest depth node corresponding to x". This means that the coding they use in the proof is incomputable. However, they show it exists, and that it can be decoded in finite time, giving a description of the string. See also Sequence spaces http://www.scholarpedia.org/article/Algorithmic_information_theory The discovery of algorithmic probability Seems like a very nice read. Solomonoff's theory of inductive inference An Introduction to Kolmogorov Complexity and Its Applications (1 cr) Algorithmic Learning Theory (ALT) 2016 Expanded and improved proof of the relation between description complexity and algorithmic probability http://www-igm.univ-mlv.fr/~berstel/Articles/2010HandbookCodes.pdf Also called imperative knowledge in Computer science https://www.youtube.com/watch?v=gwlevtaC-u0&list=PL6ED884C7AEE68027 Discrete algorithms conference papers [[http://www2.idsia.ch/cms/fun16/ FUN with biological algorithms FUN with combinatorial algorithms
FUN with cryptographic algorithms FUN with distributed algorithms
FUN with game-theoretic algorithms FUN with geometrical algorithms
FUN with graph algorithms FUN with mobile algorithms
FUN with Internet algorithms FUN with parallel algorithms
FUN with optimization algorithms FUN with randomized algorithms
FUN with robotics algorithms FUN with space-conscious algorithms
FUN with string algorithms FUN with visualization of algorithms https://www.youtube.com/channel/UCC_RpWFSbwHib_LLhHJwB3w/videos?shelf_id=0&view=0&sort=dd https://en.wikipedia.org/wiki/Alzheimer%27s_disease Alzheimer's and the Brain. Relations with chromosome 21. Alzheimer's disease seems to be related to the accumulation of plaques in the brain, and with tangles. There are many, many things, thus it makes sense to look at what happens when we get more and more things. Analysis of the Computational complexity of Algorithms, i.e. finding out how much time and how much memory an algorithm takes to run. Analytic Combinatorics, Part I (Analysis of Algorithms). Already recognized as important by Babbage and Turing. However, the modern field of analysis of algorithms was started by Donald Knuth, who recognized that mathematics had the tools to analyze algorithms. Things like the following are useful tools for this: Books: four volumes of The Art of Computer Programming. Analytic combinatorics is a calculus (set of mathematical tools) for analyzing properties of large combinatorial structures. Book website, book video course. Result: a direct derivation of a GF equation (implicit or explicit), i.e. an equation that the Generating function must satisfy. Classic next steps: Result: asymptotic estimates that quantify the desired properties. Video course Analytic Combinatorics, Part II (Analytic Combinatorics). In Coursera: Analytic Combinatorics. Applications: Analysis of algorithms, Random deterministic automata, ... Studies the structure of organisms. Goes together with Physiology, which studies the function of organisms. BioDigital: 3D Human Visualization Platform for Anatomy and Disease. Epithelial tissue. Covers stuff. Connective tissue. Connects stuff (like bones and muscles). Defined by presence of an extracellular matrix. Blood and fat are thus considered connective tissue. Nerve tissue. Neurons, glial cells. Part of the vision of AugMath 3Blue1Brown
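As a toy instance of the GF workflow above (my example, not one from these notes): the symbolic method gives binary trees the GF equation T(z) = 1 + z·T(z)², whose coefficients are the Catalan numbers, and singularity analysis gives C_n ~ 4^n / (√π · n^(3/2)). A quick numerical check in log space:

```python
import math

def log_catalan(n):
    """log C_n, with C_n = binom(2n, n)/(n+1), computed via lgamma to avoid overflow."""
    return math.lgamma(2 * n + 1) - 2 * math.lgamma(n + 1) - math.log(n + 1)

def log_asymptotic(n):
    """Singularity analysis of T(z) = 1 + z*T(z)^2: C_n ~ 4^n / (sqrt(pi) * n^(3/2))."""
    return n * math.log(4) - 1.5 * math.log(n) - 0.5 * math.log(math.pi)
```

At n = 5000 the two logs agree to a few parts in 10^4, showing how quickly the asymptotic estimate becomes accurate.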
https://www.youtube.com/watch?v=uXPb9iBDwsw
https://www.youtube.com/watch?v=RU0wScIj36o
https://github.com/3b1b/manim Jim Blinn, algebraic ballet http://kotaku.com/ibm-is-making-sword-art-online-for-real-1760758238 ergo proxy Metropolis. Last scene: Metropolis (2001) - I can't stop loving you Loved this when I was very small: https://en.wikipedia.org/wiki/Montana_Jones Anthropology is the study of humans and their societies in the past and present. Bioviva FIRST GENE THERAPY SUCCESSFUL AGAINST HUMAN AGING About Deep Knowledge Life Sciences (DKLS), BGRF and Avi Roy, SENS and Aubrey de Grey http://www.longevityreporter.org/ https://global-longevity-initiative.webflow.io/ PREVENT . RESTORE . PRESERVE Do not go gentle into that good night,
Old age should burn and rave at close of day;
Rage, rage against the dying of the light.
— Dylan Thomas NLA, CASMI, Oxford and BGRF to develop the Global Healthspan Extension Initiative two of her own company's experimental gene therapies: Telomeres are short segments of DNA which cap the ends of every chromosome, acting as 'buffers' against wear and tear. They shorten with every cell division, eventually getting too short to protect the chromosome, causing the cell to malfunction and the body to age. “Current therapeutics offer only marginal benefits for people suffering from diseases of aging. Additionally, lifestyle modification has limited impact for treating these diseases. Advances in biotechnology is the best solution, and if these results are anywhere near accurate, we’ve made history” Note: this is awesome. It remains to be seen whether the success in leukocytes can be expanded to other tissues and organs, and repeated in future patients. Gene therapy to save the world. 10 responses to “Hacking Aging”
What would you say if I told you that aging happens not because of accumulation of stresses, but rather because of the intrinsic properties of the gene network of the organism? I'm guessing you'd be like: :o . https://checkvist.com/checklists/563670# http://beyondplm.com/ PLM/PDM in cloud, in blockchain. Supply chain management. Healthcare system. SAP, grabcapd, Onshape. Ethereum, Provenance, Ascribe, legal stuff. A device or piece of equipment designed to perform a specific task, typically a domestic one. Topography studies features of the surface of the Earth, as well as other planets. These can be described as landscapes. Percolation models and Percolation theory have been applied to understand these. A landscape is a height profile usually defined on a square lattice, where each cell's elevation value at position x represents the average elevation over the entire footprint of the cell (site). Now imagine that water is dripping uniformly over the landscape and fills it from the valleys to the mountains, letting the water flow out through the open boundaries. As it rains, watershed lines may form which divide the landscape into different drainage basins. These are important in geomorphology, e.g. in water management [113] and landslide and flood prevention [114]. It is possible to determine the watershed lines based on the iterative application of invasion percolation [115]. Another kind of percolation can also occur: raising the water level makes lakes join together, and eventually a lake that spans the whole landscape may form. However, whether the percolation transition is critical or not depends on the properties of the surface landscape (in particular on its correlation functions). These ideas have been applied to study the topography of the Earth, where it was found that the present sea level is a critical level in their model. 
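The lake-joining transition just described is easy to simulate; a sketch of my own (uncorrelated random heights, union-find flooding; for such an uncorrelated landscape this reduces to ordinary site percolation, threshold ≈ 0.593, whereas correlated landscapes like real topography change the transition, as the notes say):

```python
import random

def flood_threshold(L, rng):
    """Flood a random L x L landscape from the lowest cell up; return the flooded
    fraction at which a left-right spanning lake first appears."""
    heights = [[rng.random() for _ in range(L)] for _ in range(L)]
    order = sorted((heights[i][j], i, j) for i in range(L) for j in range(L))
    parent = {}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    LEFT, RIGHT = ('L',), ('R',)   # virtual nodes for the two open boundaries
    parent[LEFT], parent[RIGHT] = LEFT, RIGHT
    flooded = set()
    for k, (_, i, j) in enumerate(order, 1):
        parent[(i, j)] = (i, j)
        flooded.add((i, j))
        if j == 0:
            union((i, j), LEFT)
        if j == L - 1:
            union((i, j), RIGHT)
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if (i + di, j + dj) in flooded:
                union((i, j), (i + di, j + dj))
        if find(LEFT) == find(RIGHT):
            return k / (L * L)
    return 1.0

p_c = flood_threshold(60, random.Random(3))
```

A single 60 × 60 realization already lands near the site-percolation threshold; averaging over landscapes and growing L sharpens the estimate.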
This finding elucidates the origin of the ubiquitous scaling relations observed in various terrestrial features on Earth. Oxford course
Syllabus: Review of core complex analysis, especially continuation, multifunctions, contour integration, conformal mapping and Fourier transforms. Riemann mapping theorem (in statement only). Schwarz-Christoffel formula. Solution of Laplace's equation by conformal mapping onto a canonical domain. Applications to inviscid hydrodynamics: flow past an aerofoil and other obstacles by conformal mapping; free streamline flows; the hodograph plane. Unsteady flow with free boundaries in porous media. Application of Cauchy integrals and Plemelj formulae. Solution of mixed boundary value problems motivated by thin aerofoil theory and the theory of cracks in elastic solids. Riemann-Hilbert problems. Cauchy singular integral equations. Transform methods, complex Fourier transform. Contour integral solutions of ODEs. Wiener-Hopf method. Jordan's lemma. Also uses tools from Design optimization, including Genetic algorithms applied to grid structure optimization, which look really cool. Simulating rain for architecture. Programming Architecture is a company that solves problems in the design and construction phase of complex architectural objects. Offers software and knowledge. https://www.youtube.com/watch?v=YxJJeU9mVSU&list=PLhOObpoQndRmAGJh1mvnE6ye0z-bmIhxF See MMathPhys oral presentation. Following The Arrival of the Frequent: How Bias in Genotype-Phenotype Maps Can Steer Populations to Local Optima (remember: notes here are complementary to the paper, and don't cover all of its content, only those parts where I thought there were gaps in my understanding), we can study the effect of the structure of the genotype-phenotype (GP) map in the model of Evolution known as the Wright-Fisher model (see Population genetics). We use the haploid Wright-Fisher model with selection, where for each individual in the generation at time t, we choose a single parent from the individuals at the previous generation, according to the rule described there. 
We then include the effect of mutations, by assigning to the new individual a genotype of length L as follows: Note: the genotype is defined as a sequence of letters taken from an alphabet of K letters. See Mean field approximation to average number of phenotypes discovered in Wright-Fisher model; some equations are found there. The main result is that the expected number of individuals with a given genotype that arises at a given generation can be approximated, under certain assumptions explained in that tiddler. If the mutation rate is high enough, the population naturally spreads over different genotypes, a regime called the polymorphic limit. See the Polymorphic limit (Wright-Fisher model) tiddler for details. Main points: To model neutral exploration, we let the fitness be a Kronecker delta on the initial phenotype. The time when {the probability of having discovered a p-type individual (produced a p-type offspring)} reaches a given value is found by: Neutral spaces can be astronomically large, much bigger than even the largest viral or bacterial populations (see this paper). In that case, the local neighborhood of the population may not be fully representative of the neighborhood of the entire space. This scenario can be most easily understood in the monomorphic limit: when mutants are rare. Now, the (average) rate of neutral mutations (per individual) is the mutation rate times the probability that a mutation is neutral. See more in the Monomorphic limit (Wright-Fisher model) tiddler, and in the paper. We can see that in the large genome limit, the phenotype is found more quickly as the population increases. However, when the population becomes so large that all of the 1-mutation neighbourhood is thoroughly explored (while still staying in the monomorphic limit), the discovery time saturates, because increasing the population doesn't increase the number of explored phenotypes (during a fixation period). These results suggest that for intermediate population sizes there should be a smooth transition between these two regimes. To quantify the crossover we introduce a crossover factor. [See Figure 1.] 
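A minimal sketch of the haploid Wright-Fisher process with mutation described above (my own implementation, with a binary alphabet and a flat fitness landscape for the neutral case; parameter values are illustrative, not the paper's):

```python
import random

def wright_fisher(N, L, mu, fitness, genotype0, generations, rng):
    """Haploid Wright-Fisher: each offspring picks a parent with probability
    proportional to fitness, then mutates each of its L sites with probability mu."""
    pop = [genotype0] * N
    for _ in range(generations):
        weights = [fitness(g) for g in pop]
        parents = rng.choices(pop, weights=weights, k=N)
        pop = [tuple((b ^ 1) if rng.random() < mu else b for b in g)
               for g in parents]
    return pop

rng = random.Random(0)
N, L, mu = 200, 20, 0.005
flat = lambda g: 1.0          # neutral landscape: all genotypes equally fit
pop = wright_fisher(N, L, mu, flat, (0,) * L, 200, rng)
distinct = len(set(pop))
```

With N·mu·L = 20 ≫ 1 this run sits in the polymorphic regime, so the final population is spread over many distinct genotypes; shrinking mu pushes it toward the monomorphic limit discussed above.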
The genotype is defined by: The number of available genotypes is thus K^L. Apart from specifying K and L, we need to specify the fraction of genotypes mapping to each phenotype. The map is otherwise random. In this setting, the mean-field approximation is good if the number of genotypes mapping to each phenotype is large enough. These results also require that {the number of phenotypes} is much less than {the number of genotypes}, i.e. the map is very many-to-one. There is also a percolation threshold at a critical frequency, so that only phenotypes with frequency above it have "completely" connected neutral spaces (in the network where edges correspond to single-point mutations, i.e. genotypes separated by a Hamming distance of 1). See the theory of percolation in Network science's Newman's book, Oxford notes, and problem sheets. See also Random Induced Subgraphs of Generalized n-Cubes. Standing variation. Adaptation from standing genetic variation. RNA genotypes of length L made up of nucleotides G, C, U and A. The phenotypes are the minimum free-energy secondary structures for the sequences, which can be efficiently calculated (see Fast Folding and Comparison of RNA Secondary Structures). The number of genotypes grows as 4^L, while the number of phenotypes is thought to grow much more slowly (see Robustness and Evolvability in Living Systems - Andreas Wagner). Also: From sequences to shapes and back: a case study in RNA secondary structures - pdf. Epistasis can lead to fragmented neutral spaces and contingency in evolution. The Ascent of the Abundant: How Mutational Networks Constrain Evolution. Discovery times are slower than in the random GP map. This reflects the internal structure of the RNA: similar genotypes typically have similar mutational neighbourhoods (see Exploring phenotype space through neutral evolution), and so the population needs to neutrally explore longer in order to find novelty. 
Comment: The fact that this discussion requires speaking about a change in the environment is what makes "the arrival of the frequent" a non-equilibrium effect, I think. Compare this with the survival of the flattest, which is an equilibrium effect. We need to have because the probability of fixation is (see here (page 201) or here, or here (page 326)): So for , . We need to have so that the probability of fixation of the two alternative phenotypes is considerably larger than that of the initial phenotype , for which . Here (page 321), an expression for the case of very large is derived without using the diffusion approximation. A more frequent phenotype (, with much larger than competitor ) is favoured via two related effects: Another effect that often positively correlates with the frequency of a phenotype is Mutational robustness (see Robustness and evolvability: a paradox resolved and Epistasis can lead to fragmented neutral spaces and contingency in evolution). Mutational robustness has been shown to offer a selective advantage at high mutation rates, because phenotypes which are not robust will often mutate to deleterious mutants and probably go extinct, while phenotypes which are robust will survive. This effect is called the "survival of the flattest", as robust phenotypes correspond to "flat" regions in the fitness landscape (see the paper). This effect can also be understood in terms of free fitness (see Free fitness that always increases in evolution), in analogy to "free energy" in Statistical physics (see The application of statistical physics to evolutionary biology), as it incorporates an entropy-like term accounting for the size of the neutral space of the phenotype. However, {the arrival of the frequent} is a non-equilibrium effect (unlike {the survival of the flattest}, which assumes equilibrium or pseudo-equilibrium).
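The fixation-probability limits referred to above can be checked directly with the standard diffusion-approximation result (Kimura) for a single new mutant of selective advantage s in a haploid population of size N; the parameter names here are ours:

```python
import math

def fixation_probability(s, N):
    """Diffusion-approximation fixation probability of one new mutant,
    pi = (1 - exp(-2s)) / (1 - exp(-2Ns)); 1/N in the neutral limit."""
    if s == 0:
        return 1.0 / N
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-2 * N * s))

N = 1000
p_neutral = fixation_probability(0.0, N)     # = 1/N
p_benef = fixation_probability(0.01, N)      # ~ 2s when N*s >> 1
p_delet = fixation_probability(-0.01, N)     # ~ 0 when N*s << -1
print(p_neutral, p_benef, p_delet)
```

This makes the regimes quoted above explicit: neutral mutants fix with probability 1/N, beneficial ones with roughly 2s once Ns is large, and deleterious ones essentially never.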
This is because it describes how discovery times and discovery frequency depend on the phenotype's frequency (), after a change in the environment, when the system is out of equilibrium. For the monomorphic limit (small mutation rate, in Figure 4), the probability… Genotype-phenotype (GP) maps are observed to be highly biased: Some phenotypes are realised by orders of magnitude more genotypes than most other phenotypes. The large bias observed in the GP maps translates into a similar order-of-magnitude variation in the median discovery times for a range of population genetic parameters. However, correlations in the GP map can cause the relation between and phenotype frequency to have large fluctuations (for example, (which determines ) can be even if is quite large). For the GP maps studied, the strong bias in the GP map leads to a systematic ordering of the median discovery times of alternative phenotypes, an effect that we postulate may hold for other GP maps as well. The correlations in the RNA GP maps mean that close genotypes have similar neighbourhoods, so that one needs to explore further to reach truly new {genotype neighbourhoods}. This is why the fitting parameter is smaller than the value expected in the mean-field approx. This is also why, for very similar values of , there is a range of values of spanning about an order of magnitude. This probably means that it takes up to generations to {reach truly novel genotypic neighbourhoods} in the {neutral exploration}. Still, the many-orders-of-magnitude range observed in dominates the variation in phenotype discovery times (), providing an a posteriori justification for the mean-field approximation. It is reasonable to expect all these features to arise in other GP maps found in nature (or in artificial systems), including biological systems.
Taken together, these arguments suggest that the vast majority of possible phenotypes may never be found, and thus never fix, even though they may globally be the most fit: Evolutionary search is deeply non-ergodic (I think that this is in the sense that we don't quite reach equilibrium on reasonable time scales, or that the observation time-scales needed for the system to appear ergodic are much larger than those used in experiments. However, this is also true in many other systems, like particles in a gas; those systems, though, don't show the bias needed for the Arrival of the frequent effect). When Hugo de Vries was advocating for the importance of mutations in evolution, he famously said ‘‘Natural selection may explain the survival of the fittest, but it cannot explain the arrival of the fittest’’ [2]. Here we argue that the fittest may never arrive. Instead, evolutionary dynamics can be dominated by the ‘‘arrival of the frequent’’. Older comments: So I think what he was talking about is that we can construct a network of phenotypes which is a projection of the network of genotypes via the genotype-phenotype map. Links in the network of genotypes are possible mutations, and all genotypes have the same degree. However, not all nodes have the same degree in the network of phenotypes. We can then apply results from network theory on the stationary distribution for a random walker on a network. So this sets a bias on the distribution on the phenotype network. Over this bias there will be the fitness surface. That subtle aspect of the human mind committed to the creation of new structures in the World. That is, by virtue of its extreme complexity, human brains are able to catalyze equally complex structures out of the molecular chaos that excites the neurons in random ways. These structures can resonate with the brains/minds of other people in ways that evoke emotion, and may thus be called beautiful by these people.
For this reason art may be considered as a language, or a means for communicating emotions. Art itself has been studied, but even more often it has been practised, creating a vast amount of works of art throughout human history. The classification of these works, though of course blurry, is based in part on the medium the art is expressed in, and which senses are primarily used to experience it. Other interesting definitions discussed here: Is Programming Art? - MPJ's Musings - FunFunFunction #33 Portal:Contents/Culture and the arts See also Technology & Engineering Fund artists! https://www.patreon.com/ Computational intelligence - Scholarpedia Oxford course (with video) Deep learning. Youtube playlist by mathematicalmonk Hugo Larochelle YB videos Read Neural Turing machines paper See also: Evolutionary computing, Bio-inspired computing, Sloppy systems Artificial intelligence (AI) has the overall goal of understanding and engineering intelligence, behaviour that involves understanding, and higher cognitive functions. It is a broad and very interdisciplinary field. It feeds to and from Machine learning, Logic, Cognitive science, Neuroscience, etc. Oxford's society OxAI Machine intelligence is essentially a synonym of AI, but with the connotation of using machines and computers to create and understand intelligence. The biggest part of it, Machine learning, deals with the problem of extracting features from data (learning) so as to solve (mostly) predictive tasks. Uses Miscellaneous notes from Nando's first deep learning lecture Challenges: One-shot learning, multi-task & transfer learning, scaling and energy efficiency, ability to generate data (e.g. vision as inverse graphics), architectures for AI. See more at Machine learning Why do Deep Learning models perform so well?
Seems to be a result of: Eric Drexler - A Cambrian explosion in Deep learning A Gradient Descent Method for a Neural Fractal Memory https://www.oreilly.com/ideas/the-current-state-of-machine-intelligence-2-0 http://tuvalu.santafe.edu/~walter/AlChemy/alchemy.html Arrival of the fittest The modern evolutionary synthesis based on Population genetics has an existence problem: it assumes the existence of individuals, genes, alleles, etc. They propose a simple model abstracting from chemistry to explain the Self-organization of self-maintaining and self-replicating structures, necessary for the origin of life and Darwinian Evolution. They point out related work in autopoiesis, concurrent computation, a "chemical abstract machine", autocatalytic reaction networks. See Artificial and machine intelligence and Machine learning Approaches Connectionists. Deep learning, artificial neural networks. Backpropagation. Evolutionaries Bayesians. Bayesian networks. Symbolists Analogizers. Universal AI theory THEORY OF UNIVERSAL LEARNING MACHINES & UNIVERSAL AI Universal Artificial Intelligence http://link.springer.com/chapter/10.2991/978-94-91216-62-6_5#page-1 http://people.idsia.ch/~juergen/goedelmachine.html Project Malmo, which lets researchers use Minecraft for AI research, makes public debut Thinking, perception, action; loops b/w them Google/Deepmind The meaning of AlphaGo, the AI program that beat a Go champ Is AlphaGo Really Such a Big Deal? Google’s DeepMind AI group unveils health care ambitions http://deepmind.com/health etc etc Vicarious Facebook See New advances in deep learning and people I follow on Twitter NNAISENSE leverages the 25-year proven track record of one of the leading research teams in AI to build large-scale neural network solutions for superhuman perception and intelligent automation, with the ultimate goal of marketing general-purpose Artificial Intelligences.
https://github.com/baidu-research/warp-ctc A Roadmap towards Machine Intelligence http://people.idsia.ch/~juergen/rnnai2016.html US White House - Preparing for the Future of Artificial Intelligence Aka artificial neural network. A particularly useful way of representing functions for problems in Machine learning. It is a very good model for many problems, and learning algorithms produce very good results with them. In particular deep learning (which uses ANNs with many layers). Hugo Larochelle class videos on [2.9] A neuron has: 1) inputs 2) a weight vector that multiplies the input vector or the activation vector of hidden layers 3) a bias that is added to the result 4) an activation function that takes as argument the result of the above (called pre-activation or input activation) 5) The result (called the activation) may be the input of other neurons in the next layer, in a multilayer feedforward neural network. 6) The activation of the last layer is the output. Overall... we are multiplying by matrices and applying simple nonlinear functions. Universal approximator theorem See paper mentioned in Hugo's vid: single-hidden-layer ANNs can approximate any continuous function with sufficiently many neurons in the hidden layer. There may not be a learning algorithm to find the right parameter set though. Learning by minimizing a cost function using SGD. An efficient algorithm to compute the gradients of the loss function w.r.t. the ANN's parameters is backpropagation. Backpropagation. It effectively uses the chain rule to compute the gradient w.r.t. parameters at one layer from the values of the gradients w.r.t. parameters at the layer above (deeper). Why backprop is more efficient than the naive approach. Derivatives w.r.t. the input give you a way of knowing which part of the input is determining the classification, i.e. where the cat is in the image, for example. A Neural Network in 11 Lines of Python More models, and generalizations Backpropagation, temporal networks, etc.
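The neuron bookkeeping above (pre-activation, activation, chain rule reusing the layer above's gradient) can be made concrete in a minimal sketch: a one-hidden-layer network trained by full-batch backpropagation on XOR. The architecture, learning rate and iteration count are illustrative choices, in the spirit of the "11 lines of Python" note.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr, losses = 0.5, []
for _ in range(5000):
    # forward pass: pre-activation z, then activation a, layer by layer
    z1 = X @ W1 + b1; a1 = sigmoid(z1)
    z2 = a1 @ W2 + b2; a2 = sigmoid(z2)
    losses.append(((a2 - y) ** 2).mean())
    # backward pass: chain rule, gradient at layer 1 reuses d2 from layer 2
    d2 = (a2 - y) * a2 * (1 - a2)          # dLoss/dz2 (squared-error loss)
    d1 = (d2 @ W2.T) * a1 * (1 - a1)       # dLoss/dz1
    W2 -= lr * a1.T @ d2; b2 -= lr * d2.sum(axis=0)
    W1 -= lr * X.T @ d1;  b1 -= lr * d1.sum(axis=0)

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(np.round(pred.ravel(), 2))
```

Each weight update is just "multiply by matrices and apply simple nonlinear functions" run forwards, then the same graph traversed backwards with the chain rule.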
Visualizing and Understanding Deep Neural Networks by Matt Zeiler Physical implementations: Chemical implementations of neural networks and Turing machines More Layerless neural networks? See Chico Calmagro's work with Ard Louis. On the complex backpropagation algorithm Neural networks for control systems—A survey Genetic deep neural networks using different activation functions for financial data mining Structure Discovery of Deep Neural Network Based on Evolutionary Algorithms Genetic algorithms for evolving deep neural networks Implementation of Evolutionary Algorithms for Deep Architectures See ideas here: Idea for neural network for chemical synthesis and manufacturing etc. Facebook post: https://www.facebook.com/guillermovalleperez/posts/10153853693416223? Neural networks and physical systems with emergent collective computational abilities Spin-glass models of neural networks See Measures and metrics for networks Homophily or assortative mixing is a bias in favour of connections between network nodes with some similar characteristics. Assortative mixing by enumerative characteristics Enumerative (a.k.a. categorical) characteristics are those where the possible values don't have any particular metric for being close (i.e. a distance function). E.g.: gender, school. Measure given by modularity: where is the Kronecker delta, which is 1 if the category of is the same as that for , and 0 otherwise. Another way to write it turns out to be: where is the fraction of edges that join nodes of type to nodes of type , and is the fraction of ends of edges attached to nodes of type . If we generalize to weighted networks, then would be the strength, i.e. the weighted degree; would be the fraction of edge weights joining nodes in the two sets, and would be the fraction of half the edge weight assigned to nodes in set .
This is just equal to the number of edges connecting vertices of alike type, minus the expected such number for a random network (with degree distributions for each category fixed). is called the modularity matrix. The normalized modularity (normalized by its maximum value, attained when all edges fall between alike nodes) is called an assortativity coefficient. Assortative mixing by scalar characteristics By scalar characteristics we mean those that have a metric giving a notion of closeness, so that two nodes can be approximately alike (age, etc.). Measure by a Pearson coefficient (i.e. a normalized covariance) for the correlation of the value of the scalar at the two ends of the edge. The covariance turns out to be: and one can divide by its max value to get an assortativity coefficient. If the assortativity is positive, the network is sometimes said to be stratified. Other nonlinear kinds of correlations may not be detected by the Pearson coefficient (for example low and high being more often connected with intermediate ). Other information-theoretic measures may then be used, or a scatter plot of vs for visual insight, as in the figure below: Note that in this figure, the values 9, 10, 11, 12 are bins, and the positions of points (which represent edges, or pairs of nodes) within each bin are just used to visually aid in identifying blocks with more density. Assortative mixing by degree Degree is a special case of a scalar characteristic, because degrees may be close to one another (using the usual distance function on integers), so we use the same formula. If a network shows assortative mixing by degree, it often displays a core (with high density of nodes) and periphery (with low) structure; see (a). If it shows disassortative mixing by degree, it often shows star-like features and is more uniform; see (b). There appears to be another definition of a quantity called assortativity in this review.
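The categorical-mixing modularity described above, Q = sum_r (e_rr - a_r^2), where e_rr is the fraction of edges joining two nodes of type r and a_r the fraction of edge ends attached to type r, can be computed from an edge list in a few lines. The graph and type assignment below are toy data of our own (two triangles bridged by one edge):

```python
from collections import Counter

def modularity(edges, node_type):
    """Q = sum over types r of (e_rr - a_r**2), for an undirected edge list."""
    m = len(edges)
    same = Counter()   # e_rr: fraction of edges with both ends of type r
    ends = Counter()   # a_r : fraction of edge ends attached to type r
    for u, v in edges:
        ru, rv = node_type[u], node_type[v]
        ends[ru] += 1 / (2 * m)
        ends[rv] += 1 / (2 * m)
        if ru == rv:
            same[ru] += 1 / m
    return sum(same[r] - ends[r] ** 2 for r in ends)

# two triangles of types A and B, joined by the single edge (2, 3)
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
node_type = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
Q = modularity(edges, node_type)
print(Q)   # close to 5/14: strongly assortative by type
```

Here e_AA = e_BB = 3/7 and a_A = a_B = 1/2, so Q = 6/7 - 1/2 = 5/14, well above the zero expected for random mixing.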
Can also rewrite the assortativity coefficient in this case as a Pearson coefficient for the distribution of the "excess degree" of nodes (i.e. follow an edge to a node and look at the distribution of remaining stubs). See page 5 in notes. Notes: Given some network and two partitions (assignments of nodes to categories), we can calculate their modularities, and find which is "more modular". Maximizing is a good way of finding "communities" of densely-connected nodes with sparse connections between those sets. Can define a scalar measure of assortativity. See page 3 in notes. Convergence vs. asymptoticness: asymptoticness is often more useful in practice, because truncated asymptotic series give good results, while convergent series often don't unless you take many terms. Asymptotic approximation (or asymptotic expansion)... An example is an asymptotic power series. See notes for definitions. Big O: as (f could be asymptotic to const*g, or much smaller) Small o: f is strictly much less than g Strict order: f is strictly of order g, i.e. asymptotic to some constant times g. If a function possesses an asymptotic approximation in terms of an asymptotic sequence, then that approximation is unique for that particular sequence. Note that the uniqueness is for a given sequence. A single function may have many asymptotic approximations, each in terms of a different sequence. Note also that the uniqueness is for a given function: two functions may share the same asymptotic approximation, because they differ by a quantity smaller than the last term included. Two functions sharing the same asymptotic power series, as above, can only differ by a quantity which is not analytic, because two analytic functions with the same power series are identical. Asymptotic approximations can be naively added, subtracted, multiplied or divided, resulting in the correct asymptotic expression for the sum, difference, product or quotient, perhaps based on an enlarged asymptotic sequence.
One asymptotic series can be substituted into another, although care is needed with exponentials. Asymptotic expansions can be integrated term by term with respect to , resulting in the correct asymptotic expansion of the integral. However, in general they may not be differentiated with safety, i.e., when differentiating there is always the worry that neglected higher-order terms suddenly become important. Optimal truncation: Truncating at the smallest term is known as optimal truncation. So far we have been considering functions of a single variable as that variable tends to zero. Such problems often occur in ordinary and especially partial differential equations, when considering far-field behaviour for example, and these are known as coordinate expansions. More common is for the solution of an equation to depend on more than one variable, say. Often we have a differential equation in the independent variable which contains a small parameter , hence the name parametric expansion. For functions of two variables the obvious generalisation is to allow the coefficients of the asymptotic expansion to be functions of the second variable: as See examples in notes, and problems. One has to choose the right functions. Nice because it gives the error term explicitly, and it can often be bounded. Trick of separating the integration domain. Failure of integration by parts General rule: Integration by parts will not work if the contribution from one of the limits of integration is much larger than the size of the integral. It can still fail in other cases, if for some reason the terms in the expansion can't be generated by the IBP. as For real : contributions come from near the global maxima of . For imaginary : contributions come from regions of stationary phase (where ). Most general and powerful: for generally complex , with the integral along a complex contour in general too. Splitting the range of integration and using different approximations in each range.
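Optimal truncation can be demonstrated numerically. A classic example (our choice, not from the notes) is the divergent asymptotic series integral_0^inf e^(-t)/(1 + x t) dt ~ sum_n (-1)^n n! x^n as x -> 0+: the partial-sum error first shrinks, is smallest near the smallest term (around n ~ 1/x), and then blows up.

```python
import math

def f_numeric(x, tmax=60.0, steps=120000):
    """Trapezoid-rule value of integral_0^inf e^(-t)/(1+x t) dt.
    Truncating at tmax=60 loses only ~e^-60, negligible here."""
    h = tmax / steps
    total = 0.5 * (1.0 + math.exp(-tmax) / (1 + x * tmax))
    for i in range(1, steps):
        t = i * h
        total += math.exp(-t) / (1 + x * t)
    return total * h

x = 0.1
exact = f_numeric(x)
partial, errors = 0.0, []
for n in range(20):
    partial += (-1) ** n * math.factorial(n) * x ** n   # add the n-th term
    errors.append(abs(partial - exact))

best = min(range(20), key=lambda n: errors[n])
print("best truncation order:", best, "error:", errors[best])
```

With x = 0.1 the optimal truncation sits near n = 10 (the smallest term), with an error of order 10^-4, while both the one-term and the twenty-term "approximations" are far worse: exactly the convergence-vs-asymptoticness point made above.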
See examples. Trick I use, similar to IBP. https://en.wikipedia.org/wiki/Athanasius_Kircher See The Horn of Alexander the Great https://web.stanford.edu/group/kircher/cgi-bin/site/?page_id=517 Speaking tubes connected to statues
Hydraulic organ
http://machinamenta.blogspot.co.uk/2013/07/athanasius-kircher.html One of the sources I used about Kircher is now available online. Beyond his ideas about organizing knowledge and automating art, his books are just a kick to look through. They're almost like illustrations for encyclopedia articles, but then you see a dragon, or a ladder to the center of the earth, or a mountain in the shape of a ma (Translated from Kircher's Latin:) "We divided the height of the Tower reaching up to the Moon into 5 parts, each of which contains 50 semidiameters of the terrestrial globe; with 2 semidiameters, according to the Moon's nearest distance from the centre of the earth, making 52 semidiameters of the geocosm; whence it is clearly concluded that the terrestrial globe would have been moved out of its centre by the weight of the Tower, by as much space as the gap between O and N. You will likewise see that the weight of the Tower, balanced against the globe of the earth M.L., would have far exceeded the weight of the globe of the earth." System of subterranean fires
Atomic physics is the field of physics that studies atoms as an isolated system of electrons and an atomic nucleus. See Oxford course Structure of the periodic table The periodic table is mostly determined by the electronic structure of atoms (see Atomic physics). See also Chemistry There are three rules of thumb, which were discovered phenomenologically (I think), but are justifiable from quantum mechanics: Aufbau principle: Shells should be filled starting with the lowest available energy state. An entire shell is filled before another shell is started. Madelung’s Rule: The energy ordering is from the lowest value of n+l to the largest; and when two shells have the same value of n+l, fill the one with the smaller n first. Teaching Atomic Structure: Madelung’s and Hund’s Rules in One Chart Standard solution-phase reactions are based on reactivities of different sites in molecules, and don't offer control of relative positioning, other than by statistical mechanics. Stereotactic chemical reactions use molecular/supramolecular components to guide the relative positioning of reactive components. This has been demonstrated using tip-based methods to juxtapose reactive molecules, or to remove hydrogen from hydrogen-passivated silicon (111) surfaces. Ribosomes are natural examples. A method by Turberfield et al. uses molecular motors to guide reactive monomers to make polymers with a controlled sequence. Very slow for bulk production, but maybe good for research. Nano 3D printer scheme for APM. Talk at Martin School in Jan 2016 Talk by Merkle. Mechanosynthesis, etc. Structural DNA nanotechnology for APM: Mechanical design of DNA nanostructures "As the applications of DNA nanotechnology expand, a consideration of their mechanical behavior is becoming essential to understand how these structures will respond to physical interactions.
" Direct Design of an Energy Landscape with Bistable DNA Origami Mechanisms "Recently we have demonstrated the possibility of implementing macroscopic engineering design approaches to construct DNA origami mechanisms (DOM) with programmable motion and tunable flexibility. " Artificial molecular machines. Large collection of molecular machines, mainly from the fruitful field of supramolecular chemistry. See the book "molecular machines" by Ross Kelly. Artificial molecular machines (2000) Artificial molecular-level machines. Light-powered molecular machines. http://nextbigfuture.com/2011/03/philip-moriarty-discusses.html http://www.softmachines.org/wordpress/?p=205 http://www.nottingham.ac.uk/~ppzstm/research.php http://www.molecularassembler.com/Nanofactory/ More molecular machines More animated simulations from NanoEngineer here Atomistic Design and Simulations of Nanoscale Machines and Assembly http://www.wag.caltech.edu/gallery/gallery_nanotec.html https://www.cgl.ucsf.edu/chimera/data/smart-team-jan2009/smart.html http://www.imm.org/research/parts/ Chimera molecular modelling software system. Nice https://www.cgl.ucsf.edu/chimera/data/smart-team-jan2009/smart.html Diamond mechanosynthesis: http://www.molecularassembler.com/ Some examples from Nature Books: "Nanosystems" by Eric Drexler (1992) "molecular machines" by Ross Kelly (2005) "Nanoelectronics and Nanosystems" Karl Goser et al. (2004) Adenosine triphosphate Molecule that stores chemical energy for the Cell. Produced by Cellular respiration. Adenine + Ribose + 3 phosphate groups. ATP hydrolyses to ADP and phosphate, by unbonding one of the phosphate groups, and releasing energy. METEOR IS GOING TO STOP SUPPORTING FREE METEOR.COM HOSTING, CHANGE TO HEROKU OR SOMETHING Get examples from here: https://brilliant.org/ https://keep.google.com/u/0/#search/text=augmath Automatic simplification is kept to a minimum, to allow notation tricks used in practice for manipulation.
In the future, a setting for which level of auto-simplification is desired should be added. It's hard to practice defensive programming when you are trying to give users so much freedom. It's interesting how, having several different representations of the math (the LaTeX, the math tree, and the HTML), we can be more efficient by picking the most appropriate one for each task (like checking some property). Parsing AugMath at the moment does parsing, but without much validation. Inputting maths http://mathdox.org/formulaeditor/ Check this!! MathQuill Slack channel: https://mathquill.slack.com/messages/mathquill/ See also their website Displaying maths KaTeX, MathJax http://docs.mathjax.org/en/latest/advanced/extension-writing.html https://github.com/mathjax/MathJax-third-party-extensions/tree/master/physics Geometry software http://www.cinderella.de/tiki-index.php Mathematical document Handwritten math recognition: https://www.facebook.com/groups/hackathonhackers/permalink/1265209943534488/ Other mathematical software http://www.matracas.org/sentido/ See stuff in GKeep and KTreeTop in Dropbox http://cognitivemedium.com/emm/emm.html Related to Theory of computation. Automata theory is the study of abstract machines or automata, as well as the computational problems that can be solved using them. An automaton (plural: automata or automatons) is a self-operating machine, or a machine or control mechanism designed to follow automatically a predetermined sequence of operations, or respond to predetermined instructions. Automata include finite-state machines, etc. Discrete dynamical system (e.g., networks of automata) Finite-state transducer, an FSM with output from transitions. Symbolic dynamics, a Discrete dynamical system with output from states visited.
Finite-state machine + infinite data structure For instance a Boolean network See Formal language Computer - Theory of Automata, Formal Languages and Computation A New Approach to Formal Language Theory by Kolmogorov Complexity http://www.eecs.wsu.edu/~ananth/CptS317/Lectures/IntroToAutomataTheory.pdf Automata, Computability, and Complexity: Or, Great Ideas in Theoretical Computer Science, Spring 2010 Grail: finite automata and regular expressions FAdo
Symbolic Manipulation of Code Properties
FAdo Documentation http://fado.dcc.fc.up.pt/software/ Build your own finite transducer: http://examples.mikemccandless.com/fst.py?terms=pepe%2F33%0D%0Amoth%2F1%0D%0Apop%2F2%0D%0Astar%2F3%0D%0Astop%2F4%0D%0Atop%2F5%0D%0A&cmd=Build+it%21 A computable class of Descriptional complexity measures, based on automata. Automatic complexity of strings: the smallest number of states of a DFA (deterministic finite automaton) that accepts x and does not accept any other string of length |x|. Note that a DFA recognizing the singleton language {x} always needs |x|+1 states, which is the reason the definition considers only strings of length |x|. Automaticity I: Properties of a Measure of Descriptional Complexity is a descriptional complexity measure analogous to Automatic complexity, but for languages. The finite-state dimension is defined in terms of computations of finite transducers on infinite sequences. Entropy rates and finite-state dimension Finite-state dimension and real arithmetic ☆ Newest measure in this area. Paper: http://www.sciencedirect.com/science/article/pii/S0304397511005408 Finite-state complexity defines the smallest length of input that will produce the result under a finite transducer (a finite state machine with output, basically, which I think can describe GP maps). Then we can apply Ard's argument of how many ways there are of fitting this shortest string in the fixed-length input of interest (say the genotype). This could be the beginning of the formal theory we need! We would probably also want to develop a concept of algorithmic probability (like Solomonoff's) for finite state machines. Finite-State Complexity and Randomness Finite-State Complexity and the Size of Transducers Finite state transducer Finite model theory Others Approximating the smallest grammar: Kolmogorov complexity in natural models.
However, the model allows the advice strings to be over an arbitrary alphabet with no penalty in terms of complexity and, as observed in [8], consequently the NFAs used for compression can always be assumed to consist of only one state... (so not a very good measure). Frameworks node.js Hosting Nice easy tutorial to deploy Meteor apps on DigitalOcean Domains Setting up DNS records for GitHub pages Remember DNS nameservers are servers that contain DNS records mapping domain names to IP addresses (and more complicated things too). Domain registrars, which let you manage domains you own, may offer their own DNS nameservers, or may offer you the capability to use a third-party name server (like Namecheap's FreeDNS) to direct domain names to IPs. Logstalgia: visualization of HTTP requests to a server Derivation of the backwards FP equation for the survival probability, solutions using Laplace transform Expected number of times I get a certain outcome for a set of random variables with the same sample space, but potentially different and dependent probability distributions Imagine I have two random variables ( and ) each of which can have value or . Imagine I want to know the expected number of As I get. This will be, by linearity of expectation: E[number of A's] = P(first variable is A) + P(second variable is A).
And this result works whether and are independent random variables or not. The only thing we require is that getting and are mutually exclusive (and similarly for ). Inclusion-exclusion principle https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle Set of all states that evolve into a given attractor under some dynamics. See also Genotype-phenotype map: a state-attractor map can be considered an example of a GP map. Neutral network sizes of biological RNA molecules can be computed and are not atypically small Structural analysis of high-dimensional basins of attraction
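The linearity-of-expectation claim above (the expected number of A's is the sum of the marginal probabilities, independence not required) can be checked numerically. The joint distribution below is an arbitrary, deliberately dependent example of our own:

```python
import random

random.seed(2)
# joint distribution over (X, Y) in {A, B}^2, deliberately correlated
joint = {("A", "A"): 0.40, ("A", "B"): 0.10,
         ("B", "A"): 0.05, ("B", "B"): 0.45}

p_x_a = joint[("A", "A")] + joint[("A", "B")]   # P(X = A) = 0.50
p_y_a = joint[("A", "A")] + joint[("B", "A")]   # P(Y = A) = 0.45

outcomes, weights = zip(*joint.items())
samples = random.choices(outcomes, weights=weights, k=200000)
# empirical mean of the number of A's per draw of (X, Y)
empirical = sum((x == "A") + (y == "A") for x, y in samples) / len(samples)
print(empirical, "vs", p_x_a + p_y_a)   # both close to 0.95
```

Despite the strong correlation between X and Y, the empirical mean matches P(X=A) + P(Y=A): linearity of expectation never uses the joint structure.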
https://en.wikipedia.org/wiki/Behavioural_sciences The study of the behaviour of animals and humans, with a focus on individual behaviour. For the study of collective behaviour see Social sciences. Portal:Contents/Religion and belief systems A belief system can refer to a Religion or a world view, i.e. a framework of ideas and beliefs through which an individual interprets the world and interacts in it. Wikipedia:Portal/Directory/Philosophy, religion, and spirituality See Measures and metrics for networks Measures the extent to which a node (or edge, or other substructure) lies on paths between other vertices. These paths can be defined in many ways, but often they are taken to be geodesic paths. This is a measure of importance because, if we imagine nodes in the network sending messages between them, we may be interested in how often these messages pass through certain nodes or edges under certain assumptions (like that they follow geodesic paths). Vertices with high betweenness but ranking low on other centrality measures can be, for example, vertices that connect two barely connected "components". Vertices like this are called brokers in the sociological literature. If we use the geodesic node betweenness, the definition is: where is the number of geodesic paths between j & n that traverse i, and is the total number of geodesic paths between j & n. For directed networks, same, but take the direction of paths into account... Can also define geodesic edge betweenness in similar fashion: with the obvious generalization of quantities. This is useful for example in road traffic analysis, where we are interested in roads, not in junctions. Some problems with robustness: Another extension is flow betweenness, which is defined as the amount of flow through vertex i when the maximum flow is transmitted from s to t, summed over pairs s and t in the network. To see more about flow see Independent paths, connectivity, and cut sets (Graph theory).
The problem with this definition is that it sometimes doesn't give a unique answer, because the same maximum flow can be achieved using different choices of independent paths. The usual definition is then to define the flow betweenness to be the maximum value that this number can take. This still has some disadvantages, because it doesn't take into account all paths, since it assumes paths are somehow optimal (although in different ways). A variant that does take all paths into account is the random-walk betweenness, defined as the expected number of times a vertex is crossed by an absorbing random walk between nodes s and t, summed over these pairs. An article by Borgatti [51] draws together many of the possibilities into a general framework for betweenness measures. The effect found in many Genotype-phenotype maps by which some phenotypes have many more corresponding genotypes than other phenotypes. This effect is important in Evolution. See MMathPhys oral presentation –The structure of the genotype–phenotype map strongly constrains the evolution of non-coding RNA pdf. Notes on the RNA GP map bias paper https://en.wikipedia.org/wiki/Bio-inspired_computing genetic algorithms (Evolutionary computing) ↔ evolution biodegradability prediction ↔ biodegradation cellular automata ↔ life emergent systems ↔ ants, termites, bees, wasps neural networks ↔ the brain artificial life ↔ life artificial immune systems ↔ immune system rendering (computer graphics) ↔ patterning
and rendering of animal skins, bird feathers, mollusk shells and bacterial colonies Lindenmayer systems ↔ plant structures communication networks and protocols ↔ epidemiology and the spread of disease membrane computers ↔ intra-membrane molecular processes in the living cell excitable media ↔ forest fires, "the wave", heart conditions, axons, etc. sensor networks ↔ sensory organs Finite populations induce metastability in evolutionary search ☆ aka phylogenetic tree See Taxonomy Scientists Unveil New ‘Tree of Life’ Complex archaea that bridge the gap between prokaryotes and eukaryotes All 2.3 Million Species Are Mapped into a Single Circle of Life The Origin of Species by Charles Darwin Global Biodiversity Information Facility Charles Darwin's Beagle library On the experimental evolution of bet-hedging A Natural History of the Senses by Diane Ackerman Simon Hampton-Essential Evolutionary Psychology The study of life, which includes the most Complex systems known. Systems biology. These levels form a nice hierarchy, but of course interact with and influence each other in crucial ways. Add links to children nodes here too, and organize more, etc. National Center for Biotechnology Information
books and Biochemistry. Using statistics, etc. https://www.khanacademy.org/science/biology A.k.a. healthcare sciences or health sciences
https://en.wikipedia.org/wiki/Outline_of_health_sciences Bipartite Networks have two kinds of nodes, and only connections between unlike nodes. The equivalent of the adjacency matrix is the incidence matrix, $B$. It can be converted into a unipartite network by a one-mode projection where two vertices are connected if they both have a connection to the same vertex of the other group (we could improve this by adding a weight: the number of those vertices (groups) they have in common). This projection generally results in a union of cliques, i.e. completely connected components. The adjacency matrix of the projection is $P = B^{T}B$ (after we remove the diagonal components). One can also have directed bipartite networks (as in Metabolic Networks), and weighted bipartite networks. Hypergraphs can be represented as bipartite Networks. This is done by mapping the different relations in the hypergraph to a second type of node, to which the original nodes can belong by being connected by an edge. The block decomposition method (BDM) is an extension of the Coding theorem method to measure the complexity of $n$-dimensional arrays. As a Network can be expressed via its Adjacency matrix, which is a 2D array, it can be used to measure Network complexity as well. The measure (which we also call BDM) of the complexity of an array $X$ is defined as $\mathrm{BDM}(X) = \sum_{(r,\,n_r) \in A_d(X)} \left[K_{CTM}(r) + \log_2 n_r\right]$, where $A_d(X)$ is the set with elements $(r, n_r)$ obtained when decomposing the array into non-overlapping subarrays of side length $d$; $r$ is one unique square, and $n_r$ is its multiplicity (number of times it appears). $K_{CTM}$ refers to the estimate of Kolmogorov complexity used in the Coding theorem method. However, for $n$-dimensional arrays, one uses $n$-dimensional Turing machines, or Turmites. Note that $\log_2 n_r$ is the number of bits needed to specify the number $n_r$. In the original paper, a set of 2-dimensional Turing machines was executed to produce all square arrays of size d = 4. This is why
the BDM is needed in order to decompose objects of larger size into objects whose Kolmogorov complexity has been
estimated. The order of the graph nodes in the adjacency matrix is relevant for
the complexity retrieved by the BDM. This is especially important in highly symmetrical graphs. In estimating complexity,
it is reasonable to consider that the complexity of a graph corresponds to the lowest value of all permutations of the
adjacency matrix, as the shortest program generating the simplest adjacency matrix is the shortest program generating
the graph. The chief advantage of a normalised measure is that it enables
a comparison among objects of different sizes without allowing the size to dominate the measure. MaxBDM is calculated approximately, as described in the paper. An online implementation and code can be found here A Boolean algebra is an Algebraic structure that models the relations between elements which can be either true or false. It is important in Mathematical logic and in Computer science. It has the structure of an orthocomplemented, distributive Lattice (algebraic structure). See Dynamical system and Network theory https://en.wikipedia.org/wiki/Boolean_network Phase Transitions in Two-Dimensional Kauffman Cellular Automata Phase transition in cellular random Boolean nets Kauffman's NK Boolean networks. "Because the regulatory structures at the edge of chaos (K ~ 2) ensure both stability [robustness] and evolutionary improvements [evolvability], they could provide the background conditions for an evolution of genetic cybernetic systems." See Relations between the stability of Boolean networks and percolation Random Networks of Automata: A Simple Annealed Approximation Stability in Boolean networks and cellular automata Boolean network models of gene regulation and signal transduction See also Artificial neural network http://research.microsoft.com/en-us/um/people/holroyd/boot/ An "infection" process in which nodes become infected if sufficiently many of their neighbors are infected. Related to the Centola-Macy threshold model for social contagions. Bootstrap percolation on spatial networks (see Spatial networks). Bootstrap Percolation - MathWorld A Borel sigma-algebra, $\mathcal{B}(X)$, on a topological space $X$, is defined as $\mathcal{B}(X) = \sigma(\tau)$, i.e. the sigma-algebra generated by $\tau$, where $\tau$ is the set of all the open sets of $X$, i.e. the topology on $X$. It is the smallest sigma-algebra that contains $\tau$. See here A Borel measure is just a Measure on a Borel $\sigma$-algebra. Specifying such a measure is simplified by the Caratheodory extension theorem. See Tree of life Erodium cicutarium Aguja de pastor. 
Their achenes curl upon drying (and also when I squeezed it with my fingers, probably because of the humidity). Seed launch is accomplished using a spring mechanism powered by shape changes as the fruits dry. The spiral shape of the awn can unwind during daily changes in humidity, leading to self-burial of the seeds once they are on the ground. The two tasks (springy launch and self-burial) are accomplished with the same tissue (the awn), which is hygroscopically active and warps upon wetting, and which also gives rise to the draggy hairs on the awn. Pepinillos del diablo Boundaries can steer active Janus spheres. Looks at catalytic conductor-insulator Janus swimmers. Note that the method of mirror images used in the paper for estimating the effects of some rotational-diffusion quenching mechanisms is not the same as that used for electrostatic charges near a conductor. It is in fact an instance of the method of mirror images as applied to Diffusion equations, where the image is used to satisfy the no-slip boundary condition in the current of ions. As the current satisfies $\mathbf{J} = \sigma \mathbf{E}$ from Ohm's law, the effect on currents should have an accompanying effect on the Electric field. Brownian motion Brownian Motion: Langevin Equation Biased random walk, probability distribution is Binomial Limits in time variable A discrete space-time random walk has a standard deviation in position that is proportional to the square root of the number of steps: $\sigma \propto \sqrt{N}$. Clearly, if we want $\sigma$ to stay finite for a finite time $t = N\,\Delta t$, we want $a^2/\Delta t$ (with $a$ the step length) to stay finite, and we get a finite diffusion constant $D \propto a^2/\Delta t$ in the continuous limit. We also get non-differentiable paths as $\Delta t \to 0$. Random walk on 2D square lattice. Combinatorics get harder Solving random walk diffusion on a finite domain with different boundary conditions Polya's recurrence theorem for random walks See also probability distribution for random walk (same as for polymer) [For example here or in Soft Matter Physics notes. The probability density at the origin goes like $t^{-d/2}$ (normalization of the Gaussian). 
One can then sum over all possible lengths of time (i.e. over $t$) and get the expected number of times one returns to (a neighbourhood of) the origin (See Note 1 in Probability theory for why). For $d \le 2$ this sum is infinite, while it's finite for $d > 2$. This can be interpreted for a polymer as it being "dense" or "sparse": summing over $t$, we are asking how many monomers of our very long polymer are close to a given point (say the origin). One can also find the probability of ever coming back, and this can be related to the expected number of times to come back. This can also be derived heuristically in the asymptotic limit of large times. First passage time: First passage time calculation using generating functions. The generating functions also give the survival probability, which is directly related to the probability of ever coming back. Continuous space-time limit from discrete random walk Diffusion If continuous space and continuous time: Diffusion equation Can also have continuous space, and discrete time, although not often used. Phenomenological derivation of Diffusion equation
Use Fick's laws of diffusion, and the Einstein–Smoluchowski relation Einstein's original derivation from the Chapman-Kolmogorov equation, as Brownian motion is assumed to be a Markov process The Fokker-Planck equation has a stationary solution for a biased periodic potential. A Brownian ratchet occurs when the potential is asymmetric. A particularly nice example is the sawtooth potential, in which the above equation gives, for the first site, three regimes. The drift velocity, when plotted against the force, shows an asymmetry between the positive and negative force regions, similar to that shown in the current-voltage curve of a diode. In fact ratchets and diodes are very analogous, and ratchets, including Brownian ratchets, can be thought of as mechanical diodes. In fact, the famous Feynman Brownian ratchet paradox was formulated by Brillouin in terms of a diode rectifier (Brillouin paradox). This rectification in Brownian ratchets can be used as a basis for fluctuation-driven transport, which is a proposed mechanism for molecular motors. See here An example of this is the tilting ratchet, in which the bias used above oscillates. Flashing ratchet Another example of a Brownian ratchet is when the potential itself oscillates (fluctuating between a low and a high potential). A special case has the potential switching stochastically between two states; this is known as the flashing ratchet, and if one of the two states has no potential, this is called the on-off ratchet. A way to solve for the probability in the stochastically flashing ratchet is to add a new label to the probability representing one of the two states of the potential landscape, call them $+$ and $-$. Then we get a Fokker-Planck/Master equation for our continuous (space, labelled by $x$) and discrete configuration space. One can then work out the evolution equation for the total probability of being at position $x$, and it turns out to have the form of a Fokker-Planck equation with an effective potential.
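The rectification in an on-off ratchet can be sketched numerically with an overdamped Langevin simulation. This is a minimal toy, not taken from any of the cited papers; the sawtooth shape, switching period, and all parameter values are my illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Asymmetric sawtooth potential of period L: a steep rise over a short
# segment of width a*L, then a gentle fall. (Illustrative parameters.)
L, a, U0 = 1.0, 0.1, 5.0

def force(x):
    """Minus the gradient of the periodic sawtooth potential."""
    xm = x % L
    return np.where(xm < a * L, -U0 / (a * L), U0 / ((1 - a) * L))

def simulate(n_particles=2000, n_steps=20000, dt=1e-4, D=0.1, t_flash=0.05):
    """On-off (flashing) ratchet: the potential is switched on and off
    with half-period t_flash; particles diffuse freely while it is off."""
    x = np.zeros(n_particles)
    t = 0.0
    for _ in range(n_steps):
        on = (t % (2 * t_flash)) < t_flash   # potential on in first half of cycle
        drift = force(x) if on else 0.0
        x += drift * dt + np.sqrt(2 * D * dt) * rng.standard_normal(n_particles)
        t += dt
    return x.mean()   # mean displacement; nonzero value signals rectification

print(simulate())
```

With these parameters the mean displacement comes out positive: free diffusion while the potential is off carries some particles past the short steep segment, and the asymmetric relaxation when the potential switches back on rectifies this into net drift, with no net macroscopic force applied.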
Bulk matter refers to a piece of matter composed of sufficiently many building blocks (elementary particles, atoms, molecules, ...) in such a way that a simple statistical description is appropriate, and bulk properties like temperature, Viscosity, and elasticity can be defined. These are also called material properties or macroscopic properties. Bulk matter can be either composed of a single phase or a mixture of phases; see for instance dispersions. I use the term "form" of matter to refer to a particular type of bulk matter: either a single phase or a mixture of phases. I also use the word material (see Materials science) mostly to refer to a particular form of matter. I think that "phase" is sometimes used more widely in the same sense as I use the term "form". When is a system bulk matter? Note that many pieces of matter are formed by components interacting in complicated ways, such that a simple statistical description does not appropriately describe their behaviour for many purposes; for instance, a computer, or a cell. These are in most situations not considered bulk matter, and should be treated as Complex systems instead. However, whether a statistical description is appropriate, and therefore whether they are considered bulk matter, really depends on the problem, and so for some problems these systems can be considered bulk matter (for instance, when studying the overall mechanical strength of a computer system). From here on, "matter" refers to bulk matter, unless otherwise specified. Bulk matter can be classified depending on its phase (see Condensed matter physics for a more detailed explanation): The Mechanics of most classical types of bulk matter can be macroscopically described via Continuum mechanics, which describes matter in terms of continuum equations, based on space-time varying fields that evolve according to Differential equations. This is the foundational theory used in Mechanical engineering, and related areas. 
Continuum equations are also widely used in physics, particularly in Solid mechanics, Fluid mechanics, Soft matter physics, Astrophysics, rheology, etc. Rheology is a branch of continuum mechanics that studies the flow of matter, primarily in a liquid state, but also as 'soft solids' or solids under conditions in which they respond with plastic flow rather than deforming elastically in response to an applied force. That is, rheology does not study a particular class of bulk matter, but the flow of any bulk matter. Thermodynamics is the classical theory describing the flow of heat through matter. It is often combined with continuum mechanics to explain phenomena such as convection. There are many more complex phases of matter, particularly in soft condensed matter, that go beyond those simply described by continuum equations (although these are still very useful for many of them). Description of these often needs more advanced ideas from Statistical physics. The microscopic study of matter, as done for example in Quantum condensed matter physics and Statistical physics, also goes beyond the phenomenological and macroscopic descriptions of classical physics, and tries to derive material properties from microscopic physics. Note busy beavers are often defined just for Turing machines on an input tape which is initially blank. Applications in Coding theorem method The ``Clockwise/Spiral Rule'' for parsing C variable declarations! Random numbers and probability distributions in C++ rand-Considered-Harmful. New functions in C++ See minute 15, for example. Uses this library: http://www.cplusplus.com/reference/random/. rand() considered deprecated for most uses. References are nothing but constant pointers in C++ (see here). See Lynda.com videos. A Calabi–Yau manifold is a special type of manifold that is described in certain branches of mathematics such as algebraic geometry. 
The Calabi–Yau manifold's properties, such as Ricci flatness, also yield applications in theoretical physics. Particularly in superstring theory, the extra dimensions of spacetime are sometimes conjectured to take the form of a 6-dimensional Calabi–Yau manifold, which led to the idea of mirror symmetry. String theory postulates 10 dimensions. The extra 6 dimensions have to be small (compactified) so that the space is approximately 4-dimensional. However, the shape of the extra 6 dimensions determines the laws of physics (the fundamental laws of particles). The problem, I think, is that it's hard to relate the two, and also that there are so many candidate manifolds. The portion of the allocated memory of a process where local variables from functions that are being executed are stored. https://www.cs.umd.edu/class/sum2003/cmsc311/Notes/Mips/stack.html Note the code of the functions themselves is stored in the text section of the allocated memory; the stack stores the local variables that the function is using, as well as some other things, like function arguments and return addresses. The LIFO property of the stack allows the easy implementation of recursive function calls. If the amount of space taken by the stack goes over a certain set limit, we get a stack overflow. A Geological period of the History of Earth that marks the beginning of the Phanerozoic eon, the animal era. To specify a measure on a Sigma-algebra it suffices to specify it on an Algebra (algebraic structure). The measure then extends to the sigma-algebra generated by that algebra, i.e. to the smallest sigma-algebra containing that algebra. The generation can be done by starting with the algebra and taking intersections and unions, so that it satisfies the axioms of a sigma-algebra. The extension is unique if the underlying measure is sigma-finite. 
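The extension step in the last paragraph can be stated precisely; a standard formulation (notation is mine):

```latex
\textbf{Carath\'eodory extension theorem.}
Let $\mathcal{A}$ be an algebra of subsets of $X$, and let
$\mu_0 : \mathcal{A} \to [0,\infty]$ be a premeasure, i.e.
$\mu_0(\varnothing) = 0$ and
$\mu_0\!\left(\textstyle\bigcup_{n=1}^{\infty} A_n\right)
  = \sum_{n=1}^{\infty} \mu_0(A_n)$
whenever the $A_n \in \mathcal{A}$ are pairwise disjoint and their union
lies in $\mathcal{A}$.
Then there exists a measure $\mu$ on $\sigma(\mathcal{A})$, the
$\sigma$-algebra generated by $\mathcal{A}$, with
$\mu|_{\mathcal{A}} = \mu_0$.
If $\mu_0$ is $\sigma$-finite, this extension is unique.
```

This is the statement used above: one specifies the measure on the algebra (e.g. on finite unions of intervals for the Lebesgue measure) and the theorem supplies it on the whole Borel sigma-algebra.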
A carbohydrate is a biological molecule consisting of carbon (C), hydrogen (H) and oxygen (O) atoms. The term is most common in biochemistry, where it is a synonym of saccharide, a group that includes sugars, starch, and cellulose. The saccharides are divided into four chemical groups: monosaccharides, disaccharides, oligosaccharides, and polysaccharides. The Cartesian power is the Cartesian product of a collection of copies of a Set. For instance the Cartesian square is $X^2 = X \times X$. https://en.wikipedia.org/wiki/Cartesian_product#Cartesian_power An operation between Sets that gives a new set composed of Tuples of elements from the original sets. A sequence of numbers that arises in Combinatorics, often from objects defined recursively, like trees. See also Analytic combinatorics Category Theory: The Beginner’s Introduction (Lesson 1 Video 1) Relations to Functional programming, lambda calculus, etc. Nucleolus: where ribosomes (and some signal recognizing molecules) are created Cytoskeleton: mesh of microtubules and actin filaments along which molecular motors walk Centrosome: organizes microtubules. It's where they originate. I think it's like the seed for their self-assembly Golgi apparatus: packages proteins into membrane-bound vesicles inside the cell before the vesicles are sent to their destination Endoplasmic reticulum. Proteins self-assemble inside them. More stuff This is how proteins are pushed inside the endoplasmic reticulum while the ribosome assembles them. Cell transport is the movement of materials across cell membranes. Cell transport includes passive and active transport: Passive transport proceeds through (simple) diffusion, facilitated diffusion and osmosis. Non-equilibrium statistical mechanics: from a paradigmatic model to biological transport Simple diffusion: small non-polar molecules. Facilitated diffusion: large or polar molecules passing through membrane protein channels. Examples of molecules that need protein channels: water through aquaporins. Active transport requires energy, e.g. in the form of ATP. 
https://en.wikipedia.org/wiki/Ion_channel Sodium-potassium pump https://www.brightstorm.com/science/biology/cell-functions-and-processes/cell-transport/ Complex systems, artificial life in Bio-inspired computing. See Dynamical systems on networks, Discrete dynamical systems Classification of Cellular Automata Computer simulations of cellular automata Equivalence of Cellular Automata to Ising Models and Directed Percolation Phase Transitions of Cellular Automata See Directed percolation Statistical Mechanics of Probabilistic Cellular Automata Universality in Elementary Cellular Automata proves a conjecture made by Stephen Wolfram in 1985, that an elementary one-dimensional cellular automaton known as “Rule 110” is capable of universal computation, i.e. it can emulate a universal Turing machine (see Theory of computation) Statistical mechanics of cellular automata A New Kind of Science - Stephen Wolfram http://www.paradise.caltech.edu/~cook/papers/index.html Game of Life Cellular Automata Von Neumann cellular automata are the original expression of cellular automata, the development of which was prompted by suggestions made to John von Neumann by his close friend and fellow mathematician Stanislaw Ulam. Their original purpose was to provide insight into the logical requirements for machine self-replication, and they were used in von Neumann's universal constructor. Codd's cellular automaton was designed to recreate the computation- and construction-universality of von Neumann's CA but with fewer states: 8 instead of 29. Langton's loops consist of a loop of cells containing genetic information, which flows continuously around the loop and out along an "arm" (or pseudopod), which will become the daughter loop. Nobili cellular automata are a variation of von Neumann cellular automata (vNCA), in which additional states provide means of memory and the interference-free crossing of signals. 
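A concrete illustration of elementary cellular automata, using Wolfram's rule-numbering convention (the width, step count, and choice of Rule 90 are my own):

```python
# Minimal elementary (1D, binary, nearest-neighbour) cellular automaton.
def step(cells, rule):
    """One synchronous update with periodic boundaries.
    The 3-cell neighbourhood is read as a number 0..7 and used to index
    into the bits of the 8-bit rule number."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=31, steps=15):
    cells = [0] * width
    cells[width // 2] = 1          # single seed cell in the middle
    rows = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        rows.append(cells)
    return rows

# Rule 90 updates each cell to the XOR of its two neighbours, so a single
# seed grows into a Sierpinski triangle; rule=110 runs the rule proven
# to be computation-universal.
for row in run(90):
    print("".join(".#"[c] for c in row))
```

Substituting other rule numbers (0 to 255) reproduces the full classification zoo discussed above, from trivial fixed points to chaotic and universal behaviour.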
See also Discrete dynamical systems http://out.coy.cat/?n=1001010010 http://out.coy.cat/?n=nicememe http://out.coy.cat/?n=cate wooow sierpinski http://out.coy.cat/?n=doitforthelulz http://cellularautomata.coy.cat/ http://out.coy.cat/?n=1269489990&s=0 http://out.coy.cat/?n=1269489997&s=0 http://out.coy.cat/?n=1269490006&s=0 http://out.coy.cat/?n=1269490035&s=0 http://out.coy.cat/?n=1269490071&s=0 like the matrix: http://out.coy.cat/rndpat.php?n=1269490075&s=0 The way cells produce energy ATP & Respiration: Crash Course Biology #7 Most of the Biomolecules that give us energy are processed and end up as Glucose. Glucose + 6 Oxygen –> 6 Carbon dioxide + 6 Water + ATP (Energy) Glycolysis: breaking Glucose into two 3-carbon molecules, called pyruvic acids, or pyruvate molecules. It uses 2 ATPs and produces 4 ATPs. It also produces NADH. It uses many enzymes, like phosphoglucoisomerase. It is an anaerobic process, as it doesn't need oxygen. If there isn't oxygen, the pyruvates undergo Fermentation. Anaerobic respiration can also produce lactic acid. However, the next steps in cellular respiration are aerobic and require oxygen. Happens inside the inner membrane of the Mitochondria. Pyruvate molecules >> 2 ATP (per glucose) + Energy First the pyruvates are oxidized. One of the three carbons in the chain bonds with 2 oxygens and leaves as CO2, leaving a 2-carbon compound called Acetyl CoA (Acetyl coenzyme A). Also an NAD+ picks up an H to form NADH. Form citric acid from oxaloacetic acid and the Acetyl CoA. There's more... It produces NADH and FADH2. On the membrane: ATP synthase. A ceramic is a rigid material that consists of an infinite three-dimensional network of sintered crystalline grains comprising metals bonded to carbon, nitrogen, or oxygen. (IUPAC) Note: The term ceramic generally applies to any class of inorganic, non-metallic product subjected to high temperature during manufacture or use. 
In Data transmission, the channel capacity is defined as $C = \max_{p(x)} I(X;Y)$. That is, the maximum mutual information of the channel's conditional probability $p(y|x)$, where the maximization is done over the possible probability distributions of the inputs. One can show this is equal to the maximum rate of information transfer over a channel such that we can recover the information at the output with negligible probability of error. Note that changing the probabilities of the inputs can be accomplished by choosing different codes to encode the input. Therefore the channel capacity can be considered to be maximizing over codes. In particular: Channel coding theorem: Long enough code blocks can achieve the channel capacity limits (similar to arguments for understanding entropy by many trials). The capacity is the logarithm of the number of distinguishable input signals. aka noisy-channel coding theorem In information theory, the noisy-channel coding theorem (sometimes Shannon's theorem) establishes that for any given degree of noise contamination of a communication channel, it is possible to communicate discrete data (digital information) nearly error-free up to a computable maximum rate through the channel, called the Channel capacity. In other words, the theorem states that given a noisy channel with channel capacity C and information transmitted at a rate R, then if R<C there exist codes that allow the probability of error at the receiver to be made arbitrarily small. This means that, theoretically, it is possible to transmit information nearly without error at any rate below a limiting rate, C. 
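The maximization defining the capacity can be carried out numerically with the Blahut-Arimoto algorithm; a minimal sketch (the function names and the binary-symmetric-channel example are my own choices):

```python
import numpy as np

def mutual_information(q, P):
    """I(X;Y) in bits for input distribution q and channel matrix P[x, y] = p(y|x)."""
    out = q @ P                                   # output distribution p(y)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(P > 0, P * np.log2(P / out), 0.0)
    return float(q @ terms.sum(axis=1))

def channel_capacity(P, n_iter=2000):
    """Blahut-Arimoto iteration for C = max_{p(x)} I(X;Y) of a discrete
    memoryless channel: repeatedly reweight the input distribution by the
    exponentiated KL divergence between p(y|x) and the current p(y)."""
    q = np.full(P.shape[0], 1.0 / P.shape[0])     # start from the uniform input
    for _ in range(n_iter):
        out = q @ P
        with np.errstate(divide="ignore", invalid="ignore"):
            kl = np.where(P > 0, P * np.log(P / out), 0.0).sum(axis=1)
        q = q * np.exp(kl)
        q /= q.sum()
    return mutual_information(q, P)

# Binary symmetric channel with flip probability 0.1:
bsc = np.array([[0.9, 0.1],
                [0.1, 0.9]])
print(round(channel_capacity(bsc), 4))
```

For the binary symmetric channel the optimal input is uniform, and the result reproduces the textbook value C = 1 − H₂(0.1) ≈ 0.531 bits; a noiseless channel (identity matrix) gives exactly 1 bit.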
See Oxford course notes Chaotic maps Symbolic dynamics can be used to analyze them Chaotic Nonlinear dynamical systems Sarkovskii's theorem: Period 3 implies chaos Every chaotic dynamical system is a fractal-manufacturing machine An Introduction to Chaotic Dynamical Systems, second edition
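A quick numerical diagnostic for chaos in these maps is the Lyapunov exponent; a sketch for the logistic map (seed, transient length, and iteration count are arbitrary choices of mine):

```python
import math

def lyapunov_logistic(r, x0=0.2, n_transient=1000, n_iter=100000):
    """Numerical Lyapunov exponent of the logistic map x -> r x (1 - x),
    estimated as the orbit average of log|f'(x)| with f'(x) = r (1 - 2x)."""
    x = x0
    for _ in range(n_transient):       # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n_iter

print(lyapunov_logistic(4.0))
```

A positive exponent is the usual working definition of chaos: at r = 4 the map is fully chaotic and the exact value is ln 2 ≈ 0.693, while at a parameter inside a periodic window (e.g. r = 3.2, a stable 2-cycle) the same estimate comes out negative.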
The ability to make small organic molecules is at the heart of everything from drug development to the making of new dyes and agricultural chemicals. But ever since the dawn of synthetic organic chemistry in the 1820s, the process has required slow, painstaking effort. Now, however, researchers led by Martin Burke, a chemist at the University of Illinois, Urbana-Champaign, have developed a novel machine that may change all that. The machine automatically synthesizes new small organic molecules by welding together premade building blocks that can be put together in any configuration. Two hundred such building blocks already exist. And thousands of other similar molecules can also be used in the process. As a result, the machine has the ability to make billions of different small organic compounds that can then be tested as new drugs or for other uses. If widely adopted, the synthesis machine could revolutionize organic chemistry, turning it from a slow, painstaking process to a made-to-order business.
Idea for a neural network for chemical synthesis and manufacturing etc. Facebook post: https://www.facebook.com/guillermovalleperez/posts/10153853693416223? IUPAC nomenclature page Gold book The Photographic Periodic Table of the Elements John McMurry, Robert C. Fay-Chemistry, 6th Edition-Prentice Hall (2012) (Theilheimer's Synthetic Methods of Organic Chemistry) Alan F. Finch-S Karger Pub (2001) Random https://en.wikipedia.org/wiki/Hydrolysis http://www.compoundchem.com/2016/05/04/oxidation-reactions-of-alcohols/ Chemotaxis (from chemo- + taxis) is the movement of an organism in response to a chemical stimulus. https://en.wikipedia.org/wiki/Chemotaxis Chromostereopsis is a visual illusion whereby the impression of depth is conveyed in two-dimensional color images, usually of red-blue or red-green colors, but can also be perceived with red-grey or blue-grey images. Such illusions have been reported for over a century and have generally been attributed to some form of chromatic aberration. See Architecture. Relations to society, and societal organization: infrastructure, economy, governance, culture. A civilization is any complex society characterized by urban development, social stratification, symbolic communication forms (typically, Writing systems), and a perceived separation from and domination over the natural environment by a cultural elite. Discriminative Supervised learning, where the output value is discrete, or categorical, or qualitative: no implicit ordering, or closeness, on the variables. Many of the same methods as in regression, as the problem is quite similar. Support vector machines. Software for SVMs: http://svmlight.joachims.org/ How do you classify data that lies in an infinite-dimensional space? Artificial neural network (see also Deep learning) Output has a notion of order, but not closeness, so it's qualitative. 
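A minimal concrete example of discriminative classification. This uses a perceptron rather than an SVM (an SVM additionally maximises the margin between the classes); the toy data, seed, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linearly separable data: two Gaussian blobs labelled -1 and +1.
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([-1] * 50 + [+1] * 50)

def perceptron(X, y, epochs=20):
    """Classic perceptron: nudge the weight vector whenever a point is
    misclassified; converges on linearly separable data."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # wrong side of (or on) the boundary
                w += yi * xi
                b += yi
    return w, b

w, b = perceptron(X, y)
accuracy = np.mean(np.sign(X @ w + b) == y)
print(accuracy)   # fraction of points correctly classified
```

On this easily separable set the learned hyperplane should classify every point correctly; the qualitative (categorical) nature of the output is what distinguishes this from regression, even though the underlying linear machinery is the same.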
See MMathPhys oral presentation, Algorithmic information theory Using the coding theorem to estimate the Kolmogorov complexity of short strings. The estimate is defined as $K_{CTM}(s) = -\log_2 D_{(n,m)}(s)$, where $D_{(n,m)}(s)$ is the frequency with which the machines in $(n,m)$ produce the output $s$, and $(n,m)$ is the set of Turing machines with $n$ states and $m$ letters in the alphabet of the input tape. The Turing machines are fed a blank tape, and whether a program halts is determined using the Busy beaver function. An extension to $n$-dimensional arrays has been developed using the Block decomposition method See this paper and this one For some reason this seems to be a popular idea in Psychology. Using these methods the people at the Algorithmic nature group made The Online Algorithmic Complexity Calculator Calculating Kolmogorov Complexity from the Output Frequency Distributions of Small Turing Machines A code is a representation of data, given by an injective map between two sets. These sets are often called the Source alphabet and the code alphabet, respectively. Coding theory (and/or coding methods) is the study of codes that satisfy certain properties. These properties are often geared towards Data transmission, Data compression, and other areas in Information theory. Codes that approach the Channel capacity limit imposed by the Channel coding theorem Codes that approach the entropy limit imposed by the Source coding theorem for lossless codes, or the limits imposed by Rate distortion theory for lossy codes. http://www.research.ibm.com/cognitive-computing/ That is the promise of cognitive systems–a category of technologies that uses natural language processing and machine learning to enable people and machines to interact more naturally to extend and magnify human expertise and cognition. These systems will learn and interact to provide expert assistance to scientists, engineers, lawyers, and other professionals in a fraction of the time it now takes. See Active colloid, Self-diffusiophoresis, Self-propelled particle. 
See also Collective hydrodynamics of active entities, Self-assembly of active colloids Dynamic self-organization of motile components can be
observed in a wide range of length scales, from bird flocks (ref) to bacterial colonies (ref, ref) and assemblies of motor and
structural proteins (ref). The fascination with these phenomena has naturally inspired researchers to use a physical understanding of motility to engineer complex emergent behaviors in model systems that promise revolutionary advances in technological applications if combined with other novel biomimetic functions, such as signal processing and decision making (see Swarm robotics), or replication (see Self-replication of information-bearing nanoscale patterns). Biological components pose inevitable limitations on this task, while chemical [14], mechanical [15], or externally actuated [16] imitations appear more promising. Individual and collective behavior of artificial swimmers: "Janus particles" Transport and Collective Dynamics in Suspensions of Confined Swimming Particles Emergent Cometlike Swarming of Optically Driven Thermally Active Colloids Collective behaviour of thermally active colloids. This model doesn't consider the dependence of the interaction on the relative orientation of the colloids. This effect is incorporated in their later model described in the paper on chemotactic colloids, and in the one on optically driven thermally active colloids Clusters, asters, and collective oscillations in chemotactic colloids Behaviour of a single chemotactic colloid in an external substrate concentration gradient Theory of phoretic mechanisms of self-propelled colloids FROM CHEMOTAXIS TO COLLECTIVE MOTION They consider the former in the paper, and look at pairwise interactions. Constructing a Langevin equation using the drift terms derived in here, which depend on the product gradient, i.e. the gradient in product concentration. Extra terms were added because the coefficients only take an external gradient into account, and now we also have gradients produced by the other catalytic colloids. These equations can also be derived phenomenologically from symmetry principles (see citations in paper), but then one doesn't get expressions for the coefficients. 
The substrate and product fields ($s$ and $p$) are themselves determined by the distribution of colloid positions and orientations. The substrate is consumed and the product is generated at the catalytic reaction rate. The evolution equation of the concentration fields is obtained depending on the averaged colloid number density and orientation density. The steady state is then considered, and the Fourier transform is applied to obtain information on the length scale of the interaction, expressed in the screening length. Saturated vs unsaturated regime. MM curve? Refers to the Michaelis-Menten rule in Enzyme kinetics. Saturated and unsaturated regimes refer to regimes where the reaction rate (which has the MM form) is saturated vs unsaturated. These are obtained from the Langevin equations above. For the orientational equations, the averaged equation involves higher moments, and a closure condition needs to be imposed to express them in terms of lower moments (mean field approximation). See the Supplementary Material in the paper. The equations for the density and orientation fields depend on the gradients of $s$ and $p$, while the equilibrium $s$ and $p$ fields depend on the density and orientation fields. The two equations can be combined to obtain closed equations for the density and orientation fields, with complicated effective interactions, which give rise to a rich diversity of possible phases, depending on the several parameters in the model. The main two regimes are: Unsaturated Saturated Formation of asters (i.e. star-like formations, I think)... Collective behaviour of active colloids This model doesn't consider the dependence of the interaction on the relative orientation of the colloids. This effect is incorporated in their later model described in the paper on chemotactic colloids, see here. It is also incorporated in their paper on optically driven thermally active colloids. See below. 
Thermal interactions via a self-generated temperature gradient (via half-coating of a dark absorbing material and laser radiation bathing the sample), and the Soret effect, also known as Thermophoresis Fokker-Planck description Regimes depend on the Soret number Brownian dynamics simulation of self-thermophoretic colloids. The colloids don't have an intrinsic asymmetry, but there is an asymmetry in their produced temperature fields because of non-uniform illumination of the light-absorbing (dark) colloid. The illumination is assumed to be directed from above downwards, and the effects of shadowing by the colloids above a particular colloid are taken into account using simple geometric optics (as a better optics treatment using light scattering on the particles is computationally very intensive). Comet-like swarms are formed, with interesting dynamic features, like internal circulation of particles in the swarm, evaporation, and ejection of hot particles from the tip. The high-density head region forms a hot core which pulls the tail of the comet along. It also drives thermal and density fluctuations. The particles at the top have the largest self-propelling velocity, so they tend to move up. They are also pulled down by the large hot core (creating large temperature gradients) below them. This interplay of effects causes larger fluctuations than one would expect in an equilibrium system. Density fluctuations at the swarm tip and temperature fluctuations are intertwined due to the transient appearance of heat sources. The swarm is a long-lived but transient structure; it is subject to a slow leakage that eventually dissolves it. It loses particles linearly with time. Velocity of swarm: if there are approximately $N_h$ particles in the head and $N_t$ particles in the tail, then the whole swarm experiences approximately a self-thermophoretic drift velocity (due to the external illumination) of $v_{\rm swarm} \approx (N_h/N)\,f/\gamma$, because approximately only the particles at the head are illuminated. 
Now the drift velocity for a single particle is $v_1 = f/\gamma$, where $f$ is the self-thermophoretic force and $\gamma$ is the drag coefficient. Now the whole swarm experiences a force $F \approx N_h f$, where $N_h$ is the number of particles in the (illuminated) head. However, because there are $N = N_h + N_t$ particles in the swarm ($N_t$ in the tail), its effective drag coefficient is $N$ times the drag coefficient of a single particle, i.e. $\gamma_{\rm swarm} \approx N\gamma$. Therefore the drift velocity of the swarm is $v_{\rm swarm} = F/\gamma_{\rm swarm} \approx (N_h/N)\,f/\gamma \sim \frac{w}{w+l}\,v_1$. The last expression comes from estimating the ratio of the number of colloids in the head to that in the tail from the swarm's shape as $N_h/N_t \sim w/l$, where $w$ is the width (radius perpendicular to the light source), and $l$ is the length, or height, of the swarm. See Active matter Continuum equations of motion for dense active nematics, such as suspensions of microtubules driven by molecular motors, or dense collections of microswimmers. They are described as nematic Liquid crystals, with an extra active term in the stress that leads to instability (typical of Non-equilibrium statistical physics), and active turbulence. See Complex fluid dynamics for the dynamical equations of liquid crystals (Beris-Edwards equations). The addition of the active term to the stress is also discussed in Complex fluid dynamics. However, is there an easier way to see this? The contribution to the stress from the active colloids is the average value of the stresslet, which for nematic active particles turns out to be the active stress $\sigma^{\rm a} = -\zeta Q$, where $\zeta$ is a measure of the level of activity and $Q$ is the nematic order parameter. Note: the velocity of the swimmer doesn't appear in the equations because the velocity of the swimmer determines the velocity of the fluid only at the first instant when the swimmers start; from there on, the velocity of the fluid equals the velocity of the swimmer, and it just evolves according to the stresses described in the paper and here. So the fact that they actually swim only sets up their initial velocities, and from there on, they are equivalent to {rods with symmetric thrust, say two thrusters, one at each end}. 
However, for other active nematics, like Molecular motors+Microtubules mixtures, the "symmetric pusher" model is reasonable even during the short accelerating phase. Therefore, because the RHS of the momentum equation contains the divergence of the active stress, changes in the direction of orientation of the nematic order induce flow. From these considerations and looking at the induced flows, one can already find two examples of instabilities: Also, activity can stabilize or destabilize nematic ordering, depending on the kind of activity and shape of particles: These can be understood from the pictures in Figure 7 in the article, reproduced below: Fluctuating hydrodynamics and microrheology of a dilute suspension of swimming bacteria A colloid is most often used to refer to either: Colloidal: State of subdivision such that the molecules or polymolecular particles dispersed in a medium have at least one dimension between approximately 1 nm and 1 μm, or that in a system discontinuities are found at distances of that order. (IUPAC) Colloid: Short synonym for colloidal system. (IUPAC) A colloidal dispersion is a system in which particles of colloidal size of any nature (e.g. solid, liquid or gas) are dispersed in a continuous phase of a different composition (or state). The name "dispersed phase" for the particles should be used only if they have essentially the properties of a bulk phase of the same composition. (http://goldbook.iupac.org/C01174.html) Branch of Physics dealing with physical properties of colloidal systems (i.e. motion, forces, etc. at the scale of the colloidal system). See book by Hunter - Foundations of colloid science
The branch of soft matter dealing with colloids has a close connection with the other subjects of Condensed matter physics, like Solid-state physics. This is because colloidal Suspensions in many ways can behave analogously to solids, whether crystalline or glassy. Colloidal particles have also been used as model systems for atoms or molecules, and so there are some connections with Atomic physics and Molecular physics. These are important in Active matter (see Active colloid), in Biophysics, and Nanotechnology. Defined precisely, genotypic robustness is the fraction of neutral mutations per genotype, and genotypic evolvability is the number of distinct phenotypes that are within one mutation of the genotype (and are not the same phenotype as that of the genotype). By contrast, phenotypic robustness is defined as the average fraction of neutral mutations per genotype across a given phenotype. This correlates positively with phenotypic evolvability, defined as the total number of distinct other phenotypes that are within one mutation of any of the genotypes belonging to the given phenotype. A communication channel is a system in which the output depends probabilistically on the input. The probability transition matrix for a given channel is the conditional probability of the output given the input, $p(y|x)$. A communication system, as studied in Communication theory, is specified by: See Information theory, Data transmission Communication theory studies the properties of Communication systems Source-channel separation theorem Properties of information source Properties of data transmission system Properties of destination
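The robustness and evolvability definitions above can be made concrete with a toy genotype-phenotype map (the map itself is made up for illustration: the phenotype of a binary genotype is its first bit, so mutations at the other sites are neutral):

```python
from itertools import product

L = 4  # genotype length (hypothetical toy example)

def phenotype(g):
    return g[0]  # made-up GP map: phenotype = first bit

def neighbours(g):
    """All single-point mutants of genotype g."""
    return [g[:i] + ('1' if g[i] == '0' else '0') + g[i+1:] for i in range(len(g))]

genotypes = [''.join(b) for b in product('01', repeat=L)]

def genotypic_robustness(g):
    # fraction of point mutations that leave the phenotype unchanged
    return sum(phenotype(n) == phenotype(g) for n in neighbours(g)) / len(g)

def genotypic_evolvability(g):
    # number of distinct *other* phenotypes one mutation away
    return len({phenotype(n) for n in neighbours(g)} - {phenotype(g)})

def phenotypic_robustness(p):
    # average genotypic robustness over all genotypes with phenotype p
    gs = [g for g in genotypes if phenotype(g) == p]
    return sum(genotypic_robustness(g) for g in gs) / len(gs)

assert genotypic_robustness('0110') == 0.75   # 3 of the 4 mutations are neutral
assert genotypic_evolvability('0110') == 1    # only phenotype '1' is one mutation away
assert phenotypic_robustness('0') == 0.75
```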
A Topological space $X$ is compact if every Filter base on $X$ has an accumulation point. That is, there exists $x \in X$ such that for every set $B$ of the filter base and every neighbourhood $U$ of $x$, $B \cap U \neq \emptyset$. An alternative, well-known definition involves properties of ‘coverings’ of $X$ by families of open sets. Study of similarities and differences between the anatomy of different organisms Comparative Anatomy: What Makes Us Animals - Crash Course Biology #21 https://en.wikipedia.org/wiki/Complex_dynamics http://www.math.harvard.edu/~ctm/papers/home/text/papers/real/book.pdf Related to Fractals and Complex systems Complex analytic dynamics on the Riemann sphere Complex Dynamics: Families and Friends Attracting Domain of a Dynamical System: A Complex Dance by Sara Lapan her webpage with some notes also on Algebraic geometry. Complex fluids are fluids with elements (mostly objects suspended in the fluid) whose dynamics couple with the fluid's dynamics, giving a more complex overall behaviour (see wiki page). The most important types are dispersions, so that they are composed of two coexisting phases. The main types are: See Active matter, for the interesting and important type of complex fluid composed of active or driven elements. Notes from Paul Dellar's course
his website • Low Reynolds number hydrodynamics, general mathematical results, flow past a sphere. Stresses due to suspended rigid particles. Calculation of the Einstein viscosity for a dilute suspension • Stresses due to Hookean dumb-bells. Derivation of the upper convected Maxwell model for a viscoelastic fluid. Properties of such fluids • Suspensions of orientable particles, Jeffery's model, very brief introduction to active suspensions and liquid crystals Classical models for nematodynamics, the dynamics of nematic liquid crystals: Doi theory? See also Soft matter physics notes See Beris A.N. and Edwards B.J., Thermodynamics of Flowing Systems (Oxford University Press) 1994, and I think Doi also has a book on this. See also here, and here. Beris-Edwards equations Continuum equations of motion of nematic Liquid crystals, in terms of the tensorial order parameter $Q$. See The Hydrodynamics of Active Systems. A physical introduction to suspension dynamics - Guazzelli, Morris Fluid dynamics of fluids with suspended particles. The suspended particles will (after a short transient) follow the fluid in its translation and rotation. However, they can't follow it in its strain deformation. Therefore the strain component of the externally imposed flow finds resistance in the suspended particles (spheres for example), and this resistance means the particles disturb the flow. Because the flow determines the stress tensor, they will affect the stress tensor. In particular, the way a suspended sphere affects the stress tensor is encoded in the stresslet. Einstein derived the Einstein viscosity through dissipation arguments. Part of these is also found in the book. Note that the dissipation is basically the integral of the stress times the strain rate, and is derived in Chapter 1. There are steps in the derivation that I don't yet quite follow https://en.wikipedia.org/wiki/Complex_geometry I think it has applications to String theory. 
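The Einstein viscosity mentioned above has a simple closed form; a minimal sketch (the 5/2 coefficient is Einstein's classical result for rigid spheres at low volume fraction):

```python
def einstein_viscosity(eta0, phi):
    """Effective shear viscosity of a dilute suspension of rigid spheres:
    eta = eta0 * (1 + (5/2) * phi), valid for volume fraction phi << 1."""
    return eta0 * (1.0 + 2.5 * phi)

# A 2% volume fraction of spheres raises the viscosity by 5%.
assert abs(einstein_viscosity(1.0, 0.02) - 1.05) < 1e-12
```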
A complex system is a high-dimensional system where the variables are strongly interdependent. See Complexity theory for more discussion on the definition of a complex system. Complex Systems (Mathematics Course at Oxford) blog https://en.wikipedia.org/wiki/Complex_system
https://en.wikipedia.org/wiki/Complex_systems Features of complex systems: Self-organization and emergence. Evolution. Adaptation. Homeostasis, autopoiesis. Crowd dynamics, chaos, order and disorder. Related: Soft matter physics. Non-equilibrium statistical physics Models and examples: Boolean network, Automata, Cellular automata, Biology, Artificial chemistry, Fractals. Control theory and control systems, Nonlinear systems, Networks (in particular Dynamical systems on networks), Social system, Percolation, Self-organized criticality, Agent-based models Maybe try to categorize these a bit. Complex Networks and Energy Landscapes See Network theory. ChaosBook.org videos
also here
YB channel Power-law Distributions in Empirical Data Statistical physics of social dynamics Part 1 Symbolic Dynamics and One-dimensional Cellular Automata: an Introduction Лекториум http://www.complex-systems.com/ https://theory.org/complexity/ https://en.wikipedia.org/wiki/Homeostasis https://www.youtube.com/user/StanfordComplexity/feed Computation, Dynamics and the Phase-Transition http://www.maths.qmul.ac.uk/research/applied ABDUS SALAM MEMORIAL LECTURE SERIES Instituto de Física Interdisciplinar y Sistemas Complejos (IFISC) Researchers in complex systems
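The "Power-law Distributions in Empirical Data" reference above gives a maximum-likelihood estimator for the exponent; a minimal sketch for the continuous case (notation alpha, xmin as in that paper):

```python
import random
from math import log

# MLE for the exponent of a continuous power law p(x) ~ x^(-alpha),
# x >= xmin, as in Clauset, Shalizi & Newman:
#   alpha_hat = 1 + n / sum(ln(x_i / xmin))
def powerlaw_alpha(xs, xmin):
    tail = [x for x in xs if x >= xmin]
    return 1.0 + len(tail) / sum(log(x / xmin) for x in tail)

# Check on synthetic data drawn by inverse-transform sampling.
random.seed(0)
alpha, xmin = 2.5, 1.0
xs = [xmin * (1.0 - random.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(50000)]
assert abs(powerlaw_alpha(xs, xmin) - alpha) < 0.05
```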
or here actually How do I explain to non-mathematical people what "non-linear and complex systems" mean? Computational Methods for Nonlinear Systems See Complexity theory for definition and theory. See Complex systems for examples and applications. Complexity is a general concept that has different meanings in different contexts.
For instance, complexity is related to “incompressibility” in information theory and computer science. In dynamical systems, complexity is usually measured by the topological entropy and reflects, roughly speaking, the proliferation of periodic orbits with ever longer periods, or the number of orbits that can be distinguished with increasing precision. In physics, the label “complex” is in principle attached to any nonlinear system whose numerical solutions exhibit a chaotic behavior. Neurologists claim that the human brain is the most complex system in the solar system, while entomologists teach us the baffling complexity of some insect societies. The list could be enlarged with examples from geometry, management science, communication and social networks, etc. (from the book on Permutation Complexity by Amigó) Good review: RANDOMNESS, INFORMATION, AND COMPLEXITY Information and Complexity Measures in Dynamical Systems See also Information theory, Statistical physics, Dynamical systems, Evolution, Simplicity bias. See Descriptional complexity, Data compression Some measures of descriptional complexity are based on Data compression techniques, like the Lempel-Ziv complexity. Relations to Grammar-based compressions used in Data compression Application of Lempel–Ziv factorization to the approximation of grammar-based compression Relations between LZ-factorizations and grammar-based factorizations (G-factorization). The G-factorization gives an upper bound for the LZ complexity. See this book too (same article), and this: Grammar Compression, LZ-Encodings, and String Algorithms with Implicit Input. Complexity is generally used to characterize something with many parts where those parts interact with each other in multiple ways. Etymologically, complex refers to a system made of many intertwined parts, and that's still the definition we use in science, although a precise measure hasn't been agreed upon. See Complexity. But how intertwined, i.e. how many and what kind of interactions, does a system need to be called complex? I think a complex system should be defined as one in which the interactions significantly alter the behaviour of the system, relative to the one with no interactions. The primary example of interactions that qualitatively affect the behavior of a system is nonlinear interactions (see Nonlinear systems). Definition by Cosma Shalizi: a complex system is a high-dimensional system where the variables are strongly interdependent. Complex systems are ones with a large effective number of strongly-interdependent variables. 
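A crude compression-based complexity measure of the Lempel-Ziv type mentioned above can be sketched by counting phrases in an LZ78-style parsing (a simplified illustration, not the original LZ76 measure):

```python
def lz78_complexity(s):
    """Count the phrases in an LZ78-style parsing of s: extend the
    current phrase until it is new, store it, and start over. Regular
    strings parse into fewer phrases than irregular ones, so the phrase
    count is a crude compression-based complexity estimate."""
    phrases, current = set(), ''
    for ch in s:
        current += ch
        if current not in phrases:
            phrases.add(current)
            current = ''
    return len(phrases) + (1 if current else 0)

assert lz78_complexity('ababab') == 4   # phrases: 'a', 'b', 'ab', plus the tail 'ab'
assert lz78_complexity('0' * 16) < lz78_complexity('0110100110010110')
```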
This excludes both low-dimensional systems, and high-dimensional ones where the variables are either independent, or so strongly coupled that only a few variables effectively determine all the rest. Since the 1980s, an interdisciplinary movement of physicists, mathematicians, economists, computer scientists, biologists, anthropologists and other scientists has explored techniques for modeling a broad range of such systems, and their common features and inter-connections. These techniques rely heavily on intensive, sophisticated computer simulations, and notions of information, search and adaptation feature prominently in the theories. (The Statistical Analysis of Complex Systems Models) See also: How do I explain to non-mathematical people what "non-linear and complex systems" mean? Furthermore, Warren Weaver posited in 1948 two forms of complexity: disorganized complexity and organized complexity. The way I interpret this is that organized or disorganized refers to the behaviour of the system, at some scale and coarse-graining level. If the system at some coarse-graining level has a behaviour that could be described by a less complex system (for example, as formalized by Kolmogorov complexity in AIT) than the original description, we say it displays organized complexity, and that a new simpler behavior has emerged (see Self-organization). This may also be called complexity reduction. One can see that coarse-graining will produce less complex descriptions, pretty much by definition. However, to get emergence, the system must allow some coarse-graining procedure that produces reasonable descriptions in the first place. Disorganized complexity refers to some scale which does not allow a simpler coarse-grained description. For example, a gas of particles represents a complex system (as the particles interact with each other in complex ways, i.e. ways that change the behavior of the system significantly relative to a system of non-interacting particles). 
At the scale of particles, we have disorganized complexity, as there is no coarse-grained description that can simplify the dynamics while still talking of all the particles. We may then use probabilistic descriptions. At larger scales, we can talk about large groups of particles, and using, for instance, averages from the probabilistic descriptions, we can construct coarse-grained descriptions in terms of "infinitesimal" volume elements interacting in less complex ways. We can say that "hydrodynamic behavior has emerged". Complexity and Self-organization Universality-Complexity Classes for Partial Differential Equation Systems (from xmorphia) Taking ideas of universality and complexity classes of cellular automata from Stephen Wolfram (cf. A New Kind of Science). https://en.wikipedia.org/wiki/Complexity_theory Kolmogorov Complexity – A Primer The First Law of Complexodynamics Well, the complexity follows that pattern at the macroscale at least. Also: non-equilibrium is more complex; I think this is because equilibrium can be described simply, as the long-time behaviour of the simple dynamical system, while non-equilibrium has many more possibilities https://jeremykun.com/2012/04/21/kolmogorov-complexity-a-primer/ See also the related: Computational complexity, and also Descriptional complexity, and Complex systems. Complexity theory may be seen as part of complexity science, or they may be seen as equivalent disciplines. In any case, this page includes complexity science. http://www.complexity.ecs.soton.ac.uk/ People http://turing.iimas.unam.mx/~cgg/ Norbert Wiener, cybernetics Heinz von Foerster, Second-order cybernetics Francis Heylighen, cyberneticist A component is a subset of the network for which every pair of vertices has at least one path between them, and which is maximal (i.e. no extra nodes can be added that preserve this property). A connected graph has only one component, while a disconnected one has more than one. 
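The component definitions just given translate directly into a breadth-first search over an adjacency-list representation (a minimal sketch):

```python
from collections import deque

# Breadth-first search over an adjacency-list graph; returns the
# components as a list of vertex sets.
def components(adj):
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        seen.add(start)
        while queue:
            v = queue.popleft()
            comp.add(v)
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        comps.append(comp)
    return comps

# A disconnected graph has more than one component.
adj = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]}
assert sorted(map(sorted, components(adj))) == [[0, 1, 2], [3, 4]]
```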
The adjacency matrix can always be written in block diagonal form with blocks corresponding to components. Components in directed networks Weakly connected components are components of a directed network ignoring the direction. Strongly connected components have a path between any two vertices in both directions. Acyclic graphs can't have strongly connected components with >1 vertex, since they would necessarily include a cycle. Out-components are all the vertices reachable from a certain vertex, including the vertex itself. Both of these are identical for all vertices in a strongly connected component. A formal language is a set of strings of symbols that may be constrained by rules that are specific to it. $\Sigma^*$ is the set of strings formed by symbols in the set $\Sigma$. From Naïve Set Theory - Cardinality & Basic Computability Theory: Definition 1.2.1. A one-way infinite, 2-tape Turing Machine is.... A configuration of the Turing machine consists of the state, the contents of the 2 tapes, and the position of the tape heads. An input string is said to be accepted by a Turing machine if the computation with initial configuration having the input string on the first tape and both heads at its left end terminates in the accepting state. The machine is said to reject the string if the Turing machine terminates in the rejecting state. (There is, of course, the possibility that the Turing Machine may not terminate its execution.) A Turing machine is said to accept a language L if every string x in the language is accepted by the Turing Machine in the above sense, and no other string is accepted. A language L is said to be decidable if both L and its complement are acceptable. Definition 1.2.2. A language is said to be acceptable if there is a Turing machine which accepts it. Definition of computability I suppose means unbounded. Useful definitions: bit-doubling function, pairing function. 
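One standard construction of the bit-doubling and pairing functions just mentioned (a sketch; the notes' exact definitions may differ): double every bit of a number and use the aligned block '01' as a stop marker, which makes the code self-delimiting.

```python
# Self-delimiting (prefix-free) encoding of natural numbers and pairs:
# each bit is doubled ('0' -> '00', '1' -> '11'), and the aligned
# 2-bit block '01' marks the end of a number.
def encode(x):
    return ''.join(b + b for b in bin(x)[2:]) + '01'

def decode(code, i=0):
    """Read one encoded number starting at position i; return (value, next i)."""
    bits = ''
    while code[i:i + 2] != '01':   # scan aligned 2-bit blocks: '00', '11', or stop
        bits += code[i]
        i += 2
    return int(bits, 2), i + 2

def pair(x, y):
    return encode(x) + encode(y)

def unpair(code):
    x, i = decode(code)
    y, _ = decode(code, i)
    return x, y

assert unpair(pair(5, 9)) == (5, 9)
# Prefix property: the encoding of one pair is not a prefix of another's.
assert not pair(5, 9).startswith(pair(5, 1))
```

Because decoding scans left to right and stops deterministically at the first aligned '01', no codeword can be a strict prefix of another, which is the prefix property claimed in the notes.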
The pairing function is a prefix code - that is, the encoding of a pair cannot be the prefix of the encoding of another pair. See Prefix code. This makes the code uniquely decodable: a pair can be identified without requiring a special marker between pairs. Theorem 1.2.10: A language is computably enumerable if and only if it is acceptable. Theorem 1.2.11: A language is decidable if and only if it is computably enumerable in increasing order. That is, a language is decidable if and only if it is finite or there is a total computable bijection such that for all numbers , Theorem 1.2.12. Every infinite computably enumerable set contains an infinite decidable set. See Computational Complexity problem sheet solutions offline version. Also see these notes on Kolmogorov complexity, for proof of Theorem 1.2.12. and more. Universality theorem: There is a universal Turing machine. Kleene's normal form theorem. There is a 3-ary partial computable function and a 1-ary partial computable function such that any 1-ary partial recursive function can be expressed as Theorem 1.2.15 There is a partial computable function that is not total computable. Create new exciting organisms with just a few lines of code
Extend nature and develop new drugs with the Synthetic™ bio-programming language and the Cytostudio™ IDE Looks awesome! Molecular modelling, dynamics and design Introduction to molecular dynamics simulation Video intro to molecular dynamics simulations Watch! Wiki page: https://en.wikipedia.org/wiki/Molecular_dynamics Geometry (energy) optimization is a typical feature of this software. Software: Chimera molecular modelling software system. Nice Delphi Electrostatic fields Ascalaph designer and abalone. Tutorial http://proteopedia.org/wiki/index.php/Molecular_modeling_and_visualization_software http://nanohub.org/resources/4540 https://en.wikipedia.org/wiki/Molecule_editor https://en.wikipedia.org/wiki/List_of_software_for_Monte_Carlo_molecular_modeling https://en.wikipedia.org/wiki/List_of_software_for_molecular_mechanics_modeling https://en.wikipedia.org/wiki/List_of_software_for_nanostructures_modeling Algorithmic or computational complexity The computational complexity of an algorithm is an asymptotic estimate of how the algorithm's running time scales with the size of its input. https://en.wikipedia.org/wiki/Computational_complexity_theory https://www.cs.cmu.edu/~adamchik/15-121/lectures/Algorithmic%20Complexity/complexity.html Pseudo-polynomial time: an algorithm runs in pseudo-polynomial time if its running time is polynomial in the numeric value of the input, but exponential in the length of the input – the number of bits required to represent it. That is because the numeric value $n$ is related to the number of bits (binary digits) $b$ by $n \approx 2^b$. P vs. NP and the Computational Complexity Zoo See Algorithmic information theory Automatic theorem proving Machine learning and automated theorem proving. 
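The pseudo-polynomial notion above is classically illustrated by the dynamic-programming solution of subset-sum, whose running time grows with the numeric value of the target:

```python
# The classic subset-sum dynamic program: time O(n * target), i.e.
# polynomial in the numeric value of `target` but exponential in the
# number of bits needed to write `target` down.
def subset_sum(nums, target):
    reachable = {0}
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable

assert subset_sum([3, 34, 4, 12, 5, 2], 9)        # 4 + 5
assert not subset_sum([3, 34, 4, 12, 5, 2], 30)   # no subset sums to 30
```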
Used for automatic Software validation https://en.wikipedia.org/wiki/Computer_algebra_system http://epubs.siam.org/doi/book/10.1137/1.9781611971033 http://homepages.math.uic.edu/~jan/mcs320/ Project MAC (the Project on Mathematics and Computation, later backronymed to Multiple Access Computer, Machine Aided Cognitions, or Man and Computer) Joel Cohen - Computer algebra and symbolic computation books Intelligent computer algebra system: Myth, fancy or reality? CASs Sage/numpy/sympy...
Matlab.
Mathematica.
Maple.
Maxima/Macsyma.
GAP.
Axiom. Web notebook: IPython and Jupyter notebook http://jupyter.readthedocs.org/en/latest/running.html Torch + IPython = iTorch: https://github.com/facebook/iTorch Basic Linear Algebra Subprograms A Software system for Computer algebra Kevin's Carbide and halgebra. See facebook convo. https://ideapad.io/augmenting-human-intellect/graph –>http://epsilonwriter.com/start.php <– Mathematica - Manipulation equations http://matracas.org/algebra/index.html.en https://github.com/MatthewJA/Graphical-Equation-Manipulator SIGGRAPH 2015 - Technical Papers Trailer ThreeNodes.js: vvvv "clone" in javascript/webgl https://vvvv.org/ Looks very nice Visual programming language For Deep learning for example Best GeForce GPU: GeForce Titan X. Titan Z coming soon ThinkMate computer with many GPU customization options. Computer science can refer broadly to Computer Science and IT, or more specifically to Theoretical computer science Computer science and Information technology (IT). Computer science is what came out of asking: what kind of maths can actually be effectively carried out in the physical world? Information technology is the result of actually carrying out this math, a step that required technology. http://colorfulengineering.org/SCICOMP.html Nice Math ∩ Programming blog: https://jeremykun.com/ http://en.tldp.org/HOWTO/Unix-and-Internet-Fundamentals-HOWTO/ Quantum random number generator: https://qrng.anu.edu.au/ Github DenseCap: Fully Convolutional Localization Networks for Dense Captioning Multi-scale networks and an application. Scene Parsing with Multiscale Feature Learning, Purity Trees, and Optimal Covers http://www.clement.farabet.net/research.html#parsing Hand-eye coordination. See work on grabbing objects in Robotics At steady-state, in the reference frame of the object, and neglecting
distortions induced by the flow (small Peclet number), the solute concentration in the liquid is given by the steady-state diffusion equation, $\nabla^2 c = 0$, with the boundary condition that the normal flux of solute at the surface of the colloid is set by some space-dependent function $\alpha(\mathbf{r}_s)$ that measures the 'surface activity', i.e. the generation or consumption of solute by a chemical reaction: $-D\,\hat{n}\cdot\nabla c\,\big|_{\rm surface} = \alpha(\mathbf{r}_s)$. (In general, describing this process involves additional coupled transport problems for other species involved in the surface reactions.) Some variations are needed for the cases of Self-electrophoresis and Self-thermophoresis. Approximately, these equations give the propulsion velocity in terms of the surface activity and surface mobility. In particular, once the surface properties and the shape are given, the velocity turns out to be independent of the size of the object, showing that this method of propulsion is robust under downscaling. A certain structure in the Mind that represents a Set of Objects, often by representing a property that defines the set. http://plato.stanford.edu/entries/concepts/ Classical theory Prototype theory Theory theory https://en.wikipedia.org/wiki/Concrete_Mathematics. The topics in Concrete Mathematics are "a blend of CONtinuous and disCRETE mathematics." The term "concrete mathematics" also denotes a complement to "abstract mathematics". (by Donald Knuth, author of TeX!!) See AugMath. These ideas of my mathematical philosophy are also brought to life in Iconic mathematics (maths that looks like what it means): Symbols ask us to think. Icons ask us to look. The symbol 5 tells us nothing about five. The icon ||||| is five. More: http://www.wbricken.com/htmls/03words/0303ed/030304iconic.html. Another keyword: experiential mathematics; a lot of its literature is applied to education, and stays at a very shallow level of the idea. See voxel.css in css part in Frontend web development. Synthetic mathematics? http://math.andrej.com/wp-content/uploads/2007/05/syncomp-mfps23.pdf More visual & concrete mathematics "Semi-concrete": http://cognitivemedium.com/emm/emm.html Bret Victor Introducing Guesstimate, a Spreadsheet for Things That Aren’t Certain Visual arithmetic on probability distributions! 
http://ncase.me/ Explorable explanations!: http://explorableexplanations.com/ https://www.quantamagazine.org/20160531-set-proof-stuns-mathematicians/ https://en.wikipedia.org/wiki/Concurrency_(computer_science) https://en.wikipedia.org/wiki/Concurrent_computing See Concurrent programming, Operating system An Operating system often implements these concurrent tasks (processes and threads) by using a scheduler that determines when each task runs. Condensed matter physics is the Physics of condensed matter. Below we look at the different broad types of condensed matter. The properties of condensed matter systems depend, among other things, on the chemical composition of the system (see Chemistry), and the physical laws the chemical components obey. Condensed matter refers to Bulk matter in a condensed form, i.e. one that is composed of condensed phases. Condensed phases include mainly solid and liquid. More generally, a condensed phase is one in which the particles adhere to each other strongly enough (by, for example, Intermolecular forces or Chemical bonds) relative to their kinetic energy that the system remains approximately bound in the absence of external forces, or in which the particles are sufficiently highly concentrated that they interact strongly (for example, non-attracting particles can be forced to condense by confining them in a small volume, or by some external force, like gravity, forcing them to be "nearly touching", as in a liquid or solid). Non-condensed matter has constituents that are barely bound together, if at all, and thus often need to be confined, either naturally or artificially, to be studied as a whole. The main types are gases (see Fluid mechanics) and plasmas. A solid is a form of matter that can resist a considerable amount of stress without flowing (so that its only response is elastic). A fluid is a form of matter that flows under virtually any amount of stress. 
A viscoelastic material displays solid-like elasticity on short time scales, and fluid-like viscosity on long time scales. There is really a continuum between these. For instance some Rubbers are closer to solids, while others are more clearly viscoelastic, depending on the ratio of elastic to viscous deformation. Solid-state physics studies matter in hard form. Hard forms are characterized by strong inter-particle bonding (often Chemical bonds, when at room temperature). This bonding is strong enough that it makes the relative positions of the particles essentially fixed, with thermal fluctuations making particles vibrate only a bit relative to these fixed positions. It is also strong enough to resist relatively large external stresses (i.e. it doesn't flow). All forms of hard matter are solids. Soft matter physics studies matter in other condensed forms (soft forms), where some or all (relative) positional degrees of freedom are "soft", that is, strongly affected by thermal fluctuations, so that they have large variances. It also includes forms with bonding so weak that the material can barely resist any external stress without flowing. Soft matter can be a solid or a fluid. Note that given the definitions above, one expects a spectrum between the two types of matter, as the definitions involve quantities that can potentially take a continuum of values. Most materials in nature, however, can be classified as one or the other. One of the most important properties of materials is that they exhibit different phases. These are understood through the study of Phase transitions. See Chaikin and Lubensky's book Principles of condensed matter physics. Hard forms There are also phases of matter that exhibit quantum effects. 
These are studied (along with other non-quantum phases that nonetheless can be understood using quantum mechanics) in Quantum condensed matter physics Order and disorder designate the presence or absence of some symmetry or correlation in a many-particle system. Disordered systems See here and here, and here Discussion Meeting: Nonlinear Physics of Disordered Systems: From Amorphous Solids to Complex Flows See Materials science for the applications of the principles of condensed matter physics to understanding and use of the wealth of materials in the world, both natural and artificial. For the study of the physics and chemistry at the interface between two phases, see Surface science. In Information theory, the conditional entropy of a Random variable, given another random variable, is the average (over the conditioning variable) of the entropy of the conditional distribution. Some results: $H(X|Y) = H(X,Y) - H(Y)$, where we use the Entropy and Joint entropy of the random variables. All of Shannon's information measures are special cases of conditional mutual information; thus conditional mutual information is the most general. See MMathPhys course, and Critical phenomena. lecture notes Field theory with conformal invariance. Conformal invariance seems to be a generic feature of critical phenomena, although this is not yet completely understood. Scale and conformal invariance in quantum field theory (wiki)
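The chain-rule identity H(X|Y) = H(X,Y) - H(Y) for conditional entropy can be checked on empirical samples (a minimal sketch):

```python
from collections import Counter
from math import log2

# Empirical entropies from a list of samples; conditional entropy via
# the chain rule H(X|Y) = H(X,Y) - H(Y).
def entropy(samples):
    counts = Counter(samples)
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in counts.values())

def conditional_entropy(pairs):
    return entropy(pairs) - entropy([y for _, y in pairs])

# X fully determined by Y: zero conditional entropy.
assert abs(conditional_entropy([(0, 'a'), (0, 'a'), (1, 'b'), (1, 'b')])) < 1e-12
# X uniform on {0, 1} and independent of Y: one full bit remains.
assert abs(conditional_entropy([(0, 'a'), (1, 'a'), (0, 'b'), (1, 'b')]) - 1.0) < 1e-12
```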
In physics and engineering, a constitutive equation or constitutive relation is a relation between two physical quantities (especially kinetic quantities as related to kinematic quantities) that is specific to a material or substance, and approximates the response of that material to external stimuli, usually applied fields or forces.

They are often just phenomenological, because bulk material, or a sufficiently large amount of condensed matter, is a very complex system, made of many interacting particles. However, they should be in principle, and sometimes are in practice, derivable from principles of Statistical physics, and often Non-equilibrium statistical physics. Those constitutive relations that are used in the description of the autonomous time-evolution of a system often need Non-equilibrium statistical physics, as systems whose macroscopic (i.e. relevant averaged) quantities evolve in time are by definition out of equilibrium. Constitutive relations for driven systems that are in quasi-equilibrium should be derivable from Equilibrium statistical physics.

Kinetic theory offers a foundation to derive constitutive equations from the microscopic details of the material. However, derivations are often hard, and give only qualitatively correct answers (more precisely, the answers are often correct up to an order-one constant, because of approximations).

Non-equilibrium thermodynamics is itself often based on more or less phenomenological principles. However, these principles can be very useful for deriving constitutive relations for large classes of systems. An example of such a principle is the requirement that the rate of entropy production be maximal. This is used in this paper to derive the Allen-Cahn equation used to describe the evolution of phase fields (see Phase transition). See On thermomechanical restrictions of continua for the paper proposing the above principle.
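As a concrete textbook example (standard material, not from these notes): the Maxwell model, the simplest constitutive relation for a viscoelastic material, puts an elastic element (modulus E) and a viscous element (viscosity η) in series, so their strain rates add:

```latex
\dot{\varepsilon} = \frac{\dot{\sigma}}{E} + \frac{\sigma}{\eta}
% At constant strain the stress relaxes exponentially:
\sigma(t) = \sigma_0 \, e^{-t/\tau}, \qquad \tau = \frac{\eta}{E}
```

The material therefore responds elastically on time scales short compared with τ and flows on long ones, matching the viscoelastic phenomenology described at the start of these notes.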
I'm sure there are other approaches, and I should learn more about Non-equilibrium statistical physics in general, to learn and organize these important ideas better. See here (page 11) and Notes on Nonequilibrium StatPhys MT2015 Oxford (mostly stochastic processes) (page 42).

Generalization of Lagrange multipliers in finite optimization problems.

Continuity of the order parameter θ(p) (the probability that an occupied site is in the infinite cluster for a given occupancy p) at p = p_c is an open mathematical problem in
the general case, but it is known to hold rigorously in 2D and, in high dimensions, using lace expansion methods (Mean-Field Behaviour and the Lace Expansion). The conjecture that θ(p_c) = 0 in intermediate dimensions (most notably d = 3) remains however one of the open problems in the field (V. Beffara, V. Sidoravicius, Percolation Theory).

See Nonlinear system and Nonlinear continuous dynamical system, as most interesting dynamical systems are nonlinear. A continuous dynamical system often refers to a Topological dynamical system on an infinite topological space; in this way, the system becomes a system of Ordinary differential equations.

A function f between two Topological spaces X and Y is continuous if, for all open sets U ⊆ Y, f⁻¹(U) is open in X, where f⁻¹(U) is the Preimage of the set U.

The continuum limit, if it is defined, is often a field theory. In particular, at the critical point, it is often a Conformal field theory, as percolation models at the critical point are found to have conformal symmetry. John Cardy used this idea to find crossing probabilities between the opposite sides of a conformal rectangle filled with a conformally invariant infinitesimal lattice: Critical Percolation in Finite Geometries. Smirnov rigorously proved that Cardy's conjecture holds for the continuum limit of site percolation on a triangular lattice: Critical percolation in the plane: conformal invariance, Cardy's formula, scaling limits. Defining the continuum limit is tricky. See Correlation Functions in Two-Dimensional Critical Systems with Conformal Symmetry.

Only certain CFTs, usually the minimal models, have been observed to possess the right structure to describe a critical lattice model in two dimensions. Due to the relatively small number of such theories, models with the same macroscopic but different microscopic properties are presumed to have identical continuum limits, which correspond to the same CFT, characterized by the value of the central charge c.
This is a restatement of the notion of universality. A relatively new method to describe the continuum limit of critical lattice models is Schramm–Loewner evolution.

The Mechanics of most classical types of Bulk matter can be macroscopically described via continuum mechanics, which describes matter in terms of continuum equations, based on space-time varying fields that evolve according to Differential equations. See Rheology for the study of flow in particular.

Normal control systems are usually classified as linear systems and nonlinear systems. A switched system consists of continuous-time/discrete-time dynamical subsystems and a rule (supervisor) that determines the switching among them. Control techniques based on switching among different controllers have been applied extensively in recent years; indeed, a switched controller can provide a performance improvement over a fixed controller. Switched systems consist of a decision layer and a control layer. The former is logical, i.e., discrete, and decides which subsystem is activated at a given time. The latter usually corresponds to a set of normal control systems. Controllability and reachability criteria for switched linear systems; Switching in Systems and Control; On partitioned controllability of switched linear systems; Finite automata approach to observability of switched Boolean control networks. Boolean control networks, I think, are Boolean networks with an external control. It is pointed out that "One of the major goals of Systems biology is to develop a control theory for complex biological systems" [14]. See also Robotics

Chico Camargo
Hey man!
I hadn't seen the recording, that's cool! I'll send you the slides via email. What diagrams do you want, just to make sure I send you the right thing?
Guillermo Valle Pérez
4/22, 12:56pm
Guillermo Valle Pérez
Well the ones where you show the tree of binary states evolving to other states, and the ones showing the complexity vs frequency for example
Chico Camargo
4/22, 12:59pm
Chico Camargo
Here's the whole thing - https://docs.google.com/presentation/d/1l-IgqXy1ZdBn__aBQX0fUH8Z2y6iuogwt753omsiyAw/edit?usp=sharing 29-06-2015 - Evolution 2015 Guaruja
What Darwin didn't know: natural variation is structured Chico Camargo University of Oxford Evolution 2015 Guarujá, Brazil
Chico Camargo
4/22, 1:01pm
Chico Camargo
Just one thing - recently I've changed my definition of phenotype to something more coarse-grained, so the plots for complexity have changed. But they're fine, the new ones say the same as the old ones. The robustness things, however, don't apply so directly to the new phenotype definition I've been exploring, so I would not include that part about robustness. All the rest is fine!
Chico Camargo
4/22, 1:03pm
Chico Camargo
Finally, an interesting paper, in case you haven't seen it:
http://rsif.royalsocietypublishing.org/content/royinterface/12/113/20150724.full.pdf
Chico Camargo
4/22, 1:03pm
Chico Camargo
They say some cool stuff there, like "The properties of genotype–phenotype (GP) maps have been studied in great detail for RNA secondary structure. These include a highly biased distribution of genotypes per phenotype, negative correlation of genotypic robustness and evolvability, positive correlation of phenotypic robustness and evolvability, shape-space covering, and a roughly logarithmic scaling of phenotypic robustness with phenotypic frequency. More recently similar properties have been discovered in other GP maps, suggesting that they may be fundamental to biological GP maps, in general, rather than specific to the RNA secondary structure map."
Guillermo Valle Pérez
4/22, 2:53pm
Guillermo Valle Pérez
Yeah i've seen that paper. I've found a way to predict, more or less, the number of sequences that map to the most frequent sequences, on average over the ensemble of transducers. It's only approximate, but it comes from thinking about certain kinds of cycles, and how simpler cycles in the transducer are more probable, so it sounds similar to your boolean network cycles thing.
It's also related to the idea about constrained and unconstrained parts, which I think is the most fundamental. The idea for transducers is that there are states that give the same output independent of the output (so they are unconstrained). Outputs that admit cycling through these states have most of the input bits unconstrained. Then if one looks at what kinds of cycles there are, one sees that the most probable are the simplest ones, and these correspond to simple outputs
Guillermo Valle Pérez
4/22, 2:59pm
Guillermo Valle Pérez
It's not totally rigorous, but estimating the probabilities of these cycles roughly gives the right frequency of the most probable strings like 11111111.. 011111111, 101010101... etc
Chico Camargo
4/22, 3:02pm
Chico Camargo
That is very interesting!
On the boolean networks it turns out that the probability of a cyclic output is an exponential with the cycle length, but the complexity bias exists even for cycles of the same length
I still don't understand how the GP map works though, maybe because I don't fully understand what a transducer really is. What is the genotype and the phenotype (and the mapping), in their case?
Guillermo Valle Pérez
4/22, 3:04pm
Guillermo Valle Pérez
grrwhen i said above "same output independent of the output" i meant "same output independent of the input"...
Chico Camargo
4/22, 3:04pm
Chico Camargo
Oh yeah I got that grin emoticon
So, I understand that a finite state transducer is like a finite automaton, but with two tapes: an input tape and an output tape
reads a tape and writes another
Guillermo Valle Pérez
4/22, 3:05pm
Guillermo Valle Pérez
http://galaxy.eti.pg.gda.pl/katedry/kiw/pracownicy/Jan.Daciuk/personal/thesis/img74.gif
Guillermo Valle Pérez
4/22, 3:07pm
Guillermo Valle Pérez
its a finite state machine. You start at a certain state, and move to the next state according to the symbol you read, following the transition whose label "x/y" has x matching that symbol. When you follow that transition you print a "y"
oh sorry
in that picture i showed you the x and y are swapped
relative to my convention
that picture isnt ver good
wait
Guillermo Valle Pérez
4/22, 3:09pm
Guillermo Valle Pérez
Guillermo Valle Pérez
4/22, 3:09pm
Guillermo Valle Pérez
thats one of the ones generated by my actual code
sideways lol
Chico Camargo
4/22, 3:09pm
Chico Camargo
Beautiful!
Ok, it's a finite state machine with two tapes, rather than just traversing (and accepting or rejecting) an input string, it translates an input string to an output string
Guillermo Valle Pérez
4/22, 3:09pm
Guillermo Valle Pérez
so you begin at state 0 and if you see a 0 you go to state 0 printing a 0, and if you see an 1 you go to state 1 also printing a 0
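The machine being described can be sketched in a few lines of Python (a hypothetical reconstruction, not the actual code from the chat; only state 0 is described above, so the state-1 transitions here are invented for illustration):

```python
def run_fst(transitions, start, inp):
    """Run a deterministic finite state transducer.
    transitions maps (state, input_symbol) -> (next_state, output_symbol)."""
    state, out = start, []
    for sym in inp:
        state, o = transitions[(state, sym)]
        out.append(o)
    return "".join(out)

# From state 0: reading 0 stays in state 0 and prints 0; reading 1 moves
# to state 1, also printing 0 (as described above). State-1 rows are made up.
T = {(0, "0"): (0, "0"), (0, "1"): (1, "0"),
     (1, "0"): (0, "1"), (1, "1"): (1, "1")}

print(run_fst(T, 0, "0110"))  # → "0011"
```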
Chico Camargo
4/22, 3:09pm
Chico Camargo
Cool
So you randomly generate a finite state transducer, and see how many input words give you each output word?
Guillermo Valle Pérez
4/22, 3:10pm
Guillermo Valle Pérez
Yep
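The experiment just described — sample a transducer, push every input word through it, and count how often each output word appears — might look like this (a naive sketch; the chat later notes the real runs used an automaton generator rather than uniformly random transition tables):

```python
import collections
import itertools
import random

def random_fst(n_states, alphabet="01", seed=0):
    """Naively sample a transition table uniformly at random."""
    rng = random.Random(seed)
    return {(s, a): (rng.randrange(n_states), rng.choice(alphabet))
            for s in range(n_states) for a in alphabet}

def output_counts(T, length):
    """Count how many input words of the given length map to each output word."""
    counts = collections.Counter()
    for bits in itertools.product("01", repeat=length):
        state, out = 0, []
        for b in bits:
            state, o = T[(state, b)]
            out.append(o)
        counts["".join(out)] += 1
    return counts

counts = output_counts(random_fst(5, seed=1), 9)
# Often the distribution is highly skewed: a few outputs absorb
# a large share of the 2^9 = 512 inputs.
print(counts.most_common(3))
```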
Chico Camargo
4/22, 3:11pm
Chico Camargo
And the trends are the same even if you generate a lot of those transducers at random?
Guillermo Valle Pérez
4/22, 3:11pm
Guillermo Valle Pérez
yeah the more you generate the more the graph seems to be a linear thing with a given spread in the frequency-vs-complexity, like the one i posted
technically you could enumerate all transducers of a given number of states
Chico Camargo
4/22, 3:13pm
Chico Camargo
So the trend is there even if you have a single transducer, but it's more obvious if you plot the results for a lot of them, is that what you're saying?
I have more questions:
How long are the input strings?
How long are the output strings?
You said you can enumerate them. How is the transducer represented?
I was reading some stuff about formal language theory last night, and it relates so much to that, you have no idea
Guillermo Valle Pérez
4/22, 3:14pm
Guillermo Valle Pérez
Well, the trend is mostly visible if you plot a lot of them. For a single one I tend to find quite some noise.
The input strings ive tried are 9-15 bits long, but they can be anything
I've made it so that the output strings are the same length as the input. I could make them variable length by adding an "empty" symbol as a possibility
but havent tried that
The transducers are represented as strings too I think, but I'm using a finite automaton generator, not generating them on my own
because it's not that trivial to generate them really uniformly apparently. I think it's because many automatons would be equivalent, and it only generates distinct ones..
Chico Camargo
4/22, 3:17pm
Chico Camargo
I see
Guillermo Valle Pérez
4/22, 3:17pm
Guillermo Valle Pérez
Tho I'm not sure how it's generating them under the hood tbh, atm
Chico Camargo
4/22, 3:17pm
Chico Camargo
Sure
Guillermo Valle Pérez
4/22, 3:18pm
Guillermo Valle Pérez
and i dont think the answer should be too different if you generated them in a more naive way
Chico Camargo
4/22, 3:18pm
Chico Camargo
I agree with you
One thing I'm trying to understand is what space is being mapped to what space
But I'm slowly getting it
Any string to any string.
(well, binary strings in both alphabets, in this case)
Guillermo Valle Pérez
4/22, 3:21pm
Guillermo Valle Pérez
yeah you can choose any alphabet. But i chose binary. and in my case its any binary string of a given length to the same set
well no there are some strings you can't get in the output
so the output space is some subset of {1,0}^*
{1,0}^n, n fixed i mean
Chico Camargo
4/22, 3:23pm
Chico Camargo
And there are binary strings that can't be generated by that transducer. Sure
Guillermo Valle Pérez
4/22, 3:23pm
Guillermo Valle Pérez
Yeah
quite a few actually
which make sense
Chico Camargo
4/22, 3:24pm
Chico Camargo
It does.
Guillermo Valle Pérez
4/22, 3:24pm
Guillermo Valle Pérez
because if there is redundancy, the phenotype space must be smaller, for a deterministic map
smaller than genotype space
Chico Camargo
4/22, 3:25pm
Chico Camargo
And because each transducer will in fact produce strings of a given shape,
like "0 1^n 0 1 0^m 1"
And when you choose the number of states in your transducer.. I would imagine
I imagine very large transducers would be unnecessarily complex
Guillermo Valle Pérez
4/22, 3:27pm
Guillermo Valle Pérez
well i choose a small number of states, like 5 so that it's simple
Chico Camargo
4/22, 3:28pm
Chico Camargo
Yeap
Guillermo Valle Pérez
4/22, 3:28pm
Guillermo Valle Pérez
I've tried more states and results are not too different, but I worry that I am taking a sample that is much smaller than all possible transducers of that size
With smaller number of states like 2 or 3, maps seem too trivial also
Chico Camargo
4/22, 3:30pm
Chico Camargo
I'd expect that the complexity bias would become too messy if the transducers were too large: you'd be using a very complex algorithm to map input to output, introducing a lot of complexity into the business
I find it interesting that when you sample different transducers, you're sampling different GP maps.
...which is something that can evolve, as well. Just like you can change the parameters of an ODE instead of changing its initial conditions, you can change the GP map instead of its I/O
Guillermo Valle Pérez
4/22, 3:32pm
Guillermo Valle Pérez
what i can't quite figure is how to relate these results more directly to other results like that of the boolean network or polyominoes. Yeah, in principle these things should map to a transducer, but how simple a transducer, and do they have some features that the {ensemble of all transducers} simply does not capture. I mean this is precisely the same problem as choosing random network null models in network theory..
Guillermo Valle Pérez
4/22, 3:32pm
Guillermo Valle Pérez
Yeah i also expect more noise for more states..
Chico Camargo
4/22, 3:33pm
Chico Camargo
So, normally a boolean network is the genotype, so your input string in this case. Same for an RNA sequence. Your transducer would be the actual map
Guillermo Valle Pérez
4/22, 3:35pm
Guillermo Valle Pérez
"which is something that can evolve". Yeah the whole reason I did was just in the spirit of null models: see if one expects these features just looking at random simple maps, without any other constraint.
But I also thought about, why choose the transducers uniformly at random, why not sample them according to the biased output of another transducer, that will produce simpler transducers more often. One can imagine a potentially infinite chain of GP maps determining GP maps, and it'd be interesting to see what one gets..
Guillermo Valle Pérez
4/22, 3:35pm
Guillermo Valle Pérez
Yeah the whole reason I did -> Yeah the whole reason I did this
Chico Camargo
4/22, 3:36pm
Chico Camargo
I agree with the spirit of null models: that is totally the point
Hoho, I know what you mean!
In fact I think there is something else to it
Guillermo Valle Pérez
4/22, 3:37pm
Guillermo Valle Pérez
Yeah this looks like whats called hyperparameter optimization in machine learning: when you optimize your machine learning model itself
Or genetic programming with evolving GP maps, which has also been tried
Another way to do this would be to make a transducer whose output changes the transducer itself, and see how that evolves
which tbh sounds like the whole idea of genetic regulatory networks where the phenotype (proteins) in some sense change the GP map (genes->proteins)
I guess when one does this one can then still define a meta GP map like what you do in the boolean networks
Chico Camargo
4/22, 3:41pm
Chico Camargo
I think so
Coupling GP maps is an interesting idea
But you've gotta play with the timescales that that involves
For example
- also on that potentially infinite chain of GP maps you mentioned:
Chico Camargo
4/22, 3:42pm
Chico Camargo
So, this chain of GP maps determining GP maps is sequential: once, in the history of life, life "chose" a set of basepairs, A-T, C-G. And it's been working pretty much with all that. And by "choosing" I mean that its rate of change slowed down. It could be from reaching a fitness peak, local or not, but the fact is that it slowed down.
Then, at some point, life "chose" a genetic code: the way codons map to aminoacids. Once that choice was frozen, life has been working with it ever since.
Then it chose some protein families. It chose chromosomes. Yada, yada, yada: (pretty much) frozen choices allowing more complex forms to emerge. And you could argue that the genetic code and these other things are still changing, but they're just changing very slowly, while other things change more quickly
Guillermo Valle Pérez
4/22, 3:43pm
Guillermo Valle Pérez
Hm, I see what you mean
Chico Camargo
4/22, 3:43pm
Chico Camargo
In a similar fashion, I see that with language. We aren't really changing our alphabet, or our grammar structures anymore, it seems like those evolved once and stopped, but they're just changing slowly. On the other hand, new words still appear all the time
I think it makes total sense to get the transducer from a set of transducers
- but if you're picking a simple transducer, you're probably already doing that
Guillermo Valle Pérez
4/22, 3:44pm
Guillermo Valle Pérez
well im picking simple ones in the sense of small number of states
Chico Camargo
4/22, 3:45pm
Chico Camargo
If the transducer can really be represented as a string, then I'd be sure of that
Guillermo Valle Pérez
4/22, 3:45pm
Guillermo Valle Pérez
but i haven't tried generating transducers from a transducer
yet
but in your example above
it seems like it would be like generating random transducers and then fixing on one. Then using that fixed transducer as maybe a building block out of which new meta transducers can be built...
tho im not sure i understand where GP maps fit in all the biological examples you mention above
Chico Camargo
4/22, 3:49pm
Chico Camargo
a GP map is a translation, an I/O machine, a transducer. Something that converts information of a kind into information of another kind
Guillermo Valle Pérez
4/22, 3:50pm
Guillermo Valle Pérez
first the atcg is an alphabet, not a GP map right? Then it evolved the codon-aminoacid, which i see its a GP map. What is the protein family, and what do the chromosomes have to do with a GP map?
i mean I would understand that gene-> protein is a GP map. Then protein->some cellular phenotype is another one..
Chico Camargo
4/22, 3:52pm
Chico Camargo
Ok, point taken, the ATCG is not a GP map. It is an alphabet.
Let me put it this way:
Chico Camargo
4/22, 3:59pm
Chico Camargo
Nature chooses a way to store information, then it pretty much settles for that one according to some criteria like thermodynamical stability and how much information you can store with that system - for instance, ATCG basepairs. Or, another way to store information, aminoacids. So now we have two alphabets, one with four letters, one with ~20.
Then, once those had been pretty much chosen, eventually nature chose/found a way to translate between them. Or maybe it found the latter alphabet as an outcome of finding the GP map that converts information stored in DNA sequences to information stored in aminoacid sequences. But anyway, it chose the alphabets, then it chose the GP map.
Focusing on the GP maps: DNA-> Proteins, Protein shape-> Protein function in the cell, gene networks -> cellular phenotype, cell type composition -> tissue structure/function/identity, whatever mapping from one kind of information to another (but just mappings, so nothing about the chromosomes I had mentioned). My point is that I think often nature tries many "transducers", many I/O machines, and eventually it chooses some of them, and builds on top of them. So the I/O alphabets and GP maps are conserved along evolution.
In this sense, humans probably use the same cell types as other apes. And we all use the same body plans as other mammals. And the same embryonic development genes as worms. Etc etc downwards, ad infinitum.
Chico Camargo
4/22, 4:01pm
Chico Camargo
I'm saying this because some structures evolve quickly and others don't: in the hierarchy of which genes regulate which other ones, the further up a gene is placed, the less it changes over time: the more conserved it is.
And I think that makes total sense, considering that it is part of a GP map that was "chosen" long ago
Guillermo Valle Pérez
4/22, 4:02pm
Guillermo Valle Pérez
and because many things depend on it, it's hard to change right?
Chico Camargo
4/22, 4:02pm
Chico Camargo
that too
in theory you could change it, but today that'd mean a drastic reduction on fitness
it'd be like trying to reinvent the genetic code: it won't work, life relies too much on that
Guillermo Valle Pérez
4/22, 4:03pm
Guillermo Valle Pérez
that's what I mean, unless you change many things along with it, in just the right ways.. which is highly unlikely
Chico Camargo
4/22, 4:03pm
Chico Camargo
Exactly
That'd be like changing the English grammar, or semantics
On the other hand, if the evolutionary innovation is pretty fresh, there's probably not much relying on it, so it's ok to break it
Guillermo Valle Pérez
4/22, 4:06pm
Guillermo Valle Pérez
This is just why it's so hard to, say, switch from qwerty to dvorak keyboards: it's changing your whole word-hand movement GP map, on which your whole internet life depends
Chico Camargo
4/22, 4:06pm
Chico Camargo
haha
yeah!
So, I think the easiest story you can tell is that a transducer is a very simple GP map, without all the biological details.
Which features did you say are not captured by the transducers?
Guillermo Valle Pérez
4/22, 4:09pm
Guillermo Valle Pérez
Well in theory all GP maps should potentially be expressed as transducers, though probably of many more states. Having 5 states is like considering the set of sufficiently coarse-grained biological models I suppose..
Chico Camargo
4/22, 4:11pm
Chico Camargo
Hm, there is one thing I still don't see
What you said resonates very well with the ideas in that paper I sent you: that all these properties come from the sequence nature of genotypes and phenotypes
or genotypes at least.
and sequences = I/O strings, great
Guillermo Valle Pérez
4/22, 4:13pm
Guillermo Valle Pérez
If you consider any number of states, transducers pretty much include everything else.. But I'm only considering simple ones.
A simple transducer can either be justified as some process during the earliest stages of evolution of some form of life (natural or artificial) where the system itself is actually simple. Say a few dots in game of life, or a few molecules.
Then the justification to apply to more complex life is probably the same as why we use coarse-grained models: yeah life is full of intricate details, but it is organized in such a way that is approximately simple.
I think this would be like saying complex life is really behaving as a transducer with many many states, but this transducer is coarse-grainable to a transducer of few states.
This actually agrees nicely with the idea that life's current GP maps were in some way determined by previous GP maps, and thus are expected to be simpler than just a random GP map from ATCG to tissue...
Chico Camargo
4/22, 4:14pm
Chico Camargo
Yeah
I'm happy with that
there is only one thing that I still fail to agree/understand
A transducer translates input to output by treating the input as an (ordered) string: it first reads the first character, then the second, then the third
And even though the map in the end is from string A to string B, it's calculated from this step-by-step reading
Guillermo Valle Pérez
4/22, 4:15pm
Guillermo Valle Pérez
that's how it "mechanically" works yeah
Chico Camargo
4/22, 4:16pm
Chico Camargo
Yeah
But for example, a gene network. The network, really. It can be written as a string, but you can also do any permutations on the gene order, and that'd give you a different string.
The GP map for gene networks is also from string to string, but it doesn't rely on reading anything step by step
and it's harder for me to talk about "what parts of the string are unconstrained", for instance
Guillermo Valle Pérez
4/22, 4:17pm
Guillermo Valle Pérez
"written as a string, but you can also do any permutations on the gene order, and that'd give you a different string." but it'd give you the same network, you mean?
Chico Camargo
4/22, 4:17pm
Chico Camargo
It'd give you a network that is isomorphic to it (like, B->A instead of A->B). That could possibly give you the same phenotype
or an isomorphic phenotype. Hm
Guillermo Valle Pérez
4/22, 4:18pm
Guillermo Valle Pérez
but i mean, if you have some map from finite strings to finite strings, it is always in principle writeable as a finite transducer
Chico Camargo
4/22, 4:18pm
Chico Camargo
Oh. That's true.
Guillermo Valle Pérez
4/22, 4:18pm
Guillermo Valle Pérez
the transducer may be quite large though
Chico Camargo
4/22, 4:18pm
Chico Camargo
There's a theorem that does that, right?
that says that
shit. That's awesome.
Guillermo Valle Pérez
4/22, 4:19pm
Guillermo Valle Pérez
yeah... I mean it's kind of floating around the results of turing and church and co. I think
A Turing machine is just a kind of finite transducer with infinite memory
so like infinite number of states..
Chico Camargo
4/22, 4:20pm
Chico Camargo
yeah yeah
Guillermo Valle Pérez
4/22, 4:20pm
Guillermo Valle Pérez
But the thing is that a map between two finite sets can always be expressible with finite memory
Chico Camargo
4/22, 4:20pm
Chico Camargo
no, there is, I'm sure
I read this theorem yesterday, it's all coming back
Guillermo Valle Pérez
4/22, 4:20pm
Guillermo Valle Pérez
ah cool
Chico Camargo
4/22, 4:21pm
Chico Camargo
What I actually read:
Guillermo Valle Pérez
4/22, 4:21pm
Guillermo Valle Pérez
You can make your own! Map any finite set of inputs to any output: http://examples.mikemccandless.com/fst.py?terms=pepe%2F33%0D%0Amoth%2F1%0D%0Apop%2F2%0D%0Astar%2F3%0D%0Astop%2F4%0D%0Atop%2F5%0D%0A&cmd=Build+it!
examples.mikemccandless.com
Chico Camargo
4/22, 4:23pm
Chico Camargo
There is a correspondence between formal grammars (sets of strings) and automata (that might accept or reject a string, saying that it does or doesn't belong in that grammar).
Finite grammars map to finite automata,
Context-free grammars to push-down automata, and so on,
Until phrase structure grammars that map to Turing machines.
Grammars and finite automata are slightly different from FST, but I'm sure there must be a version of that theorem that talks about FST.
Chico Camargo
4/22, 4:24pm
Chico Camargo
Ah brilliant!
That's really interesting, since any finite set is enumerable (and more specifically enumerable in binary), any finite set can be translated to strings.
But that alone doesn't mean that you would have any bias of any sort
Now it's clear to me that it isn't about the sequence order, as in which part comes first, but simply from the hypercube nature of the space of sequences
Guillermo Valle Pérez
4/22, 4:31pm
Guillermo Valle Pérez
See page 20 of http://web.cs.ucdavis.edu/~rogaway/classes/120/spring13/eric-transducers.pdf there it effectively says what we want, that they can encode any map between finite sets
web.cs.ucdavis.edu
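The construction those notes refer to can be sketched directly (hypothetical code, using the crudest tree-shaped variant: states are input prefixes, and the whole image is emitted on the final transition; it assumes no input is a proper prefix of another, e.g. fixed-length inputs):

```python
def map_to_fst(mapping):
    """Encode an arbitrary map between finite sets of strings as a
    transducer whose states are input prefixes. Each transition emits
    nothing until the final symbol, which emits the entire image.
    Assumes no input string is a proper prefix of another."""
    trans = {}
    for w, image in mapping.items():
        for i, a in enumerate(w):
            emit = image if i == len(w) - 1 else ""
            trans[(w[:i], a)] = (w[:i + 1], emit)
    return trans

def run(trans, w):
    """Run the transducer from the empty-prefix start state."""
    state, out = "", []
    for a in w:
        state, o = trans[(state, a)]
        out.append(o)
    return "".join(out)

# Example: bitwise NOT on 2-bit strings, realized as a transducer.
M = {"00": "11", "01": "10", "10": "01", "11": "00"}
T = map_to_fst(M)
print(all(run(T, w) == M[w] for w in M))  # → True
```

Real constructions share states and push output symbols as early as possible; this version just makes the "any finite map is a finite transducer" point concrete.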
Guillermo Valle Pérez
4/22, 4:32pm
Guillermo Valle Pérez
what is comes "from the hypercube nature of the space of sequences"?
Chico Camargo
4/22, 4:32pm
Chico Camargo
yup! I see it
Guillermo Valle Pérez
4/22, 4:33pm
Guillermo Valle Pérez
Also the bias comes from constraining the maps to be simple in the sense of few states in the fst, I think. Clearly if you considered all possible maps between two sets there wouldn't be bias on average
i meant: what comes "from the hypercube nature of the space of sequences"?
my internal fst is making so many mistakes..
Chico Camargo
4/22, 4:34pm
Chico Camargo
I agree that if you considered all the possible FST you wouldn't get any nice average, you need simple FSTs
haha have you switched to dvorak?
Chico Camargo
4/22, 4:34pm
Chico Camargo
What I mean is that the paper I sent you they say:
"The Fibonacci GP map therefore offers strong evidence that the sequential nature of biological information determines the fundamental structure of GP maps, which in turn has a profound impact on the course of biological evolution."
Guillermo Valle Pérez
4/22, 4:35pm
Guillermo Valle Pérez
no i meant internal as in in the brain. I wanted to switch to dvorak but havent had time tongue emoticon
Yeah I didn't get that part
Chico Camargo
4/22, 4:35pm
Chico Camargo
And when they say "the sequential nature", I think it suggests that the fact that information is stored in ordered sequences. But I think it isn't so much about that
Guillermo Valle Pérez
4/22, 4:35pm
Guillermo Valle Pérez
Yeah I thought it was more about it having constrained and unconstrained parts
which doesnt say anything about how the information is stored/read
Chico Camargo
4/22, 4:36pm
Chico Camargo
Yeah. I think the order doesn't really matter.
exactly.
and the unconstrained parts could be in the beginning, middle, end, or just have no order
Guillermo Valle Pérez
4/22, 4:37pm
Guillermo Valle Pérez
yeah, in fact in the fsts unconstrained parts are not a fixed portion of the input string, but depend on the previous portions of the input string
Chico Camargo
4/22, 4:37pm
Chico Camargo
as long as you have an unconstrained part whose contribution to the designability of your phenotype size grows exponentially (or just a lot) with the size of the unconstrained part: like 4^L, in the case of RNA, or 2^L in the binary case
Guillermo Valle Pérez
4/22, 4:38pm
Guillermo Valle Pérez
"An unconstrained part" should more correctly be a property of the FST mechanism than a part of the input string. In the FSTs, an unconstrained part is a state whose outputs are the same irrespective of input.
Chico Camargo
4/22, 4:39pm
indeed
unconstrained means ignored by the GP map
Guillermo Valle Pérez
4/22, 4:40pm
In the case of the FSTs, as you grow the length of the input, the input has more chances of looping through these states, and every time you go through it the number of possibilities multiplies by 2, so it grows almost exponentially
Chico Camargo
4/22, 4:40pm
so if the GP map doesn't care about sequence/string order, the unconstrained parts of the genotype won't be ordered, won't be "after a stop codon"
yeah
Guillermo Valle Pérez
4/22, 4:41pm
well there may be some order to them, but it may be more complicated and subtle, and not apparent
Chico Camargo
4/22, 4:43pm
but see, a gene network isn't ordered per se. When you decide to represent it as a string, sure, you've ordered it. For the same GP map, different orderings will produce different transducers, and therefore different orderings of the unconstrained parts, but there is no inherent order on a gene network
Chico Camargo
4/22, 4:44pm
There is, though, an exponential contribution to designability: 3^X, where X is the number of "unconstrained" interactions
Guillermo Valle Pérez
4/22, 4:44pm
And you can actually construct GP maps where the bias is towards some designed complex sequence instead of towards simple ones. However these require very special kinds of structures with the sequence coded into it, while a bias towards a simple output requires a simple structure, and thus appears often in FSTs
Chico Camargo
4/22, 4:45pm
precisely.
Guillermo Valle Pérez
4/22, 4:46pm
yeah i agree that for the network the output shouldnt depend on the ordering. However, maybe due to the nature of the FST different ordering conventions may need different FSTs
Chico Camargo
4/22, 4:46pm
Yeah
Guillermo Valle Pérez
4/22, 4:47pm
"3^X, where X is the number of "unconstrained" interactions". what are the unconstrained interactions?
Chico Camargo
4/22, 4:47pm
the genotype for a gene network is the network's directed graph
each link between nodes A and B can be +, -, or non-existent (0). That's what I called 'interactions': these links
Guillermo Valle Pérez
4/22, 4:49pm
and it is unconstrained if it doesnt affect the phenotype you define?
Chico Camargo
4/22, 4:50pm
Exactly. When I said "There is, though, an exponential contribution to designability: 3^X, where X is the number of "unconstrained" interactions", I meant that if there are X interactions that can be a +, a -, or a 0 and that won't make a difference for the resulting phenotype, they'll be increasing the designability of that phenotype by 3^X.
Guillermo Valle Pérez
4/22, 4:53pm
It'd be interesting to find the actual FST for the network description to cycle/phenotype
and see if those unconstrained parts can be seen as unconstrained states in the fst
Chico Camargo
4/22, 4:54pm
I mean, if for every GP map there is a FST, it should be
I'm still pondering
This sequence story: The cause for all these properties would not be the "sequential nature of biological information", but the fact that in nature you often have unconstrained parts whose contribution to the designability of your phenotype grows exponentially.
Guillermo Valle Pérez
4/22, 4:58pm
Yeah
Chico Camargo
4/22, 4:58pm
Well, it's from the same principles behind PCA, that most things need a short description
that's sloppiness, essentially
Guillermo Valle Pérez
4/22, 4:58pm
PCA?
Chico Camargo
4/22, 4:59pm
Principal Component Analysis. It's a technique that effectively reduces the dimensionality of a dataset by finding a set of axes (the basis, in linear algebra terms) where most of the variation in your dataset can be described by the first few axes
Guillermo Valle Pérez
4/22, 5:00pm
Ah yeah. Yeah I mean, we are trying to find (at least part) of the explanation of the simplicity in the word smile emoticon
Chico Camargo
4/22, 5:01pm
in the word and in the world? wink emoticon
Guillermo Valle Pérez
4/22, 5:02pm
haha yeah lucky mistake
Chico Camargo
4/22, 5:02pm
Man, I'm really hungry, I gotta get some lunch
But let's keep talking about this! This is really exciting, and it's awesome to talk to you about that grin emoticon
Also, would you send me your code so I play with it as well?
Guillermo Valle Pérez
4/22, 5:04pm
Sure, i'll put it on github and share! And yeah we should talk again
Chico Camargo
4/22, 5:04pm
Sweet!
See you later then!
Guillermo Valle Pérez
4/22, 5:04pm
like emoticon
Guillermo Valle Pérez
4/22, 5:18pm
https://github.com/guillefix/fst-bias guillefix/fst-bias
fst-bias - Code for the exploration of bias for simplicity in the output of random finite state transducers
github.com
Chico Camargo
4/22, 5:52pm
Cheers! http://cs231n.github.io/convolutional-networks/ http://cs231n.stanford.edu/syllabus.html Convolution The "c1 feature maps" are a set of 2D arrays of neurons. Each array looks for a feature, and a point in the array represents the location of that feature. To accomplish this, that point of that array is connected to a set of pixels centered on the corresponding point in the input image (an array of pixels). We have far fewer parameters because for each of these 2D arrays we only specify the parameters for one of the neurons in that array; all other neurons are identical, just connected to displaced sets of pixels. What is convolution? Correlation: flip the parameter vector (or array) and rewrite the correlation, and we get a convolution. Of course, there's much more to convolutions, including e.g. the convolution theorem. Stride How much you jump in pixel space (or in the previous layer) when you move from one point to another in a feature layer. One can also expand the boundary (zero padding) so that the layer obtained by convolution is the same size as the original layer. Pooling This is what it does: it downsamples, for memory, and for invariance (being more insensitive to perturbations). We can also apply non-linearities in between layers of course, like for contour enhancement. Use as many of these layers (convolutions and poolings) as we can train, 20+ (Deep learning). At the end we may have a fully connected neural layer, to do the classification, but researchers are questioning if it is that useful. We may visualize the features in the feature maps by visualizing the matrices of parameters. Sentence DynConvNet Document models (Misha Denil) Cosine similarity (a.k.a. Salton's cosine) is a measure of structural similarity of two nodes in a network. It counts the number of common neighbours of nodes $i$ and $j$ (given by $n_{ij}$) and divides by the geometric mean of the degrees of $i$ and $j$: $\sigma_{ij} = n_{ij}/\sqrt{k_i k_j}$, where $\sigma_{ij}$ is the cosine similarity.
The formula is the same as the cosine of the angle between the column of node i and row of node j considered as vectors, hence the name. Creating new maths is often done by generalizing old maths to new places. See Generalized function, or how the reals generalized the fractions, etc. See book by Cardy, and others.. See Renormalization group for a main idea in critical phenomena. Also see lectures by Balakrishnan on noneq statphys and Kardar on statphys of fields. See Critical phenomena in percolation Acoustic Emission and Critical Phenomena: From Structural Mechanics to Geophysics Energy Emissions from Critical Phenomena and Applications to Structural Health Monitoring Critical Phenomena in Natural Sciences: Chaos, Fractals, Selforganization Conformal Invariance and Applications to Statistical Mechanics Nonequilibrium Critical Phenomena and Phase Transitions into Absorbing States See Non-equilibrium statistical physics Directed percolation UNIVERSAL SCALING BEHAVIOR OF NON-EQUILIBRIUM PHASE TRANSITIONS Are Damage Spreading Transitions Generically in the Universality Class of Directed Percolation? Conformal field theory Critical phenomena in Percolation occurs at the critical value of the occupation probability corresponding to the Percolation phase transition, which separates the percolating and the non-percolating phases. Percolation models at the critical point show several interesting critical phenomena: There are a number of scaling hypotheses for several quantities for percolation near criticality (see Renormalization group for origin of scaling hypotheses). . It is believed that when , the percolation process behaves roughly in the same manner as percolation on an infinite regular tree and their critical exponents take on the corresponding values given by mean-field theory Renormalization Group Theory - Percolation. In particular, see here. A real-space renormalization group for site and bond percolation See also here. 
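The Salton cosine defined above (common neighbours over the geometric mean of the degrees) can be computed directly from the adjacency matrix; a minimal numpy sketch, with a made-up example graph (function name is for illustration only):

```python
import numpy as np

def cosine_similarity(A, i, j):
    """Structural (Salton) cosine similarity of nodes i and j of a
    simple undirected graph: n_ij / sqrt(k_i * k_j), where n_ij is the
    number of common neighbours, i.e. the (i, j) entry of A @ A."""
    n_ij = (A @ A)[i, j]      # paths of length 2 = shared neighbours
    k = A.sum(axis=1)         # degrees
    return n_ij / np.sqrt(k[i] * k[j])

# Example: nodes 0 and 2 share both of their neighbours, 1 and 3.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0]])
print(cosine_similarity(A, 0, 2))  # 2 / sqrt(2*2) = 1.0
```

The `A @ A` trick counts length-2 paths, which for a simple graph is exactly the number of shared neighbours.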
Culture (/ˈkʌltʃər/) is, in the words of E.B. Tylor, "that complex whole which includes knowledge, belief, art, morals, law, custom and any other capabilities and habits acquired by man as a member of society." https://en.wikipedia.org/wiki/Culture Polymath quest Social media Backup data. Mind data. See DB\Cosmos, etc.... Dropbox.. Natural Open sets forming a basis a Product topology Definition. This is not right I think, he is defining open cylinders which form a subbase (and he's not even defining all open cylinders). See Product topology for more. The world is full of information, much more than can be captured in this TiddlyWiki and its Cosmography section, and elsewhere.. There are many people who have made great efforts to make a lot of information readily available in an organized fashion (data), and also to make sense of it (knowledge), for example by visualizing it. United Nations Statistics Division Gapminder Hans Rosling website https://twitter.com/explorables http://bionumbers.hms.harvard.edu/ Main portal for all of Wikipedia content: https://en.wikipedia.org/wiki/Portal:Contents https://en.wikipedia.org/wiki/Category:Indexes_of_topics https://en.wikipedia.org/wiki/User:West.andrew.g/Popular_pages
http://wikitop.alwaysdata.net/wikitop_en_portal.html Wiki Portal:Contents/Reference Dictionaries, for instance: https://en.wiktionary.org/wiki/Wiktionary:Main_Page https://en.wikipedia.org/wiki/Category:Main_topic_classifications https://en.wikipedia.org/wiki/Special:AllPages http://nptel.ac.in/course.php?disciplineId=111 TW, https://github.com/ether/etherpad-lite, http://kune.cc/ Collections of technical books: https://www.safaribooksonline.com/learn/ Libgen, sci-hub.io, bookzz . org https://www.reddit.com/r/Scholar/comments/3bs1rm/meta_the_libgenscihub_thread_howtos_updates_and/ Data compression refers to the problem of finding a code that makes the average length of an encoded message as short as possible. This is sometimes called "source coding" because the most compressed code depends on the properties of the Information source producing the message. https://en.wikipedia.org/wiki/Data_compression Lossless compression Lossy compression Compression - Computerphile Entropy in Compression - Computerphile Codes used for Data compression Symbol codes Stream codes Finite State Entropy - A new breed of entropy coder Good blog on data compression advancements: http://fastcompression.blogspot.co.uk/ (IC 1.2) Applications of Compression codes http://crunchbase.linkurio.us/demo/ https://www.crunchbase.com/#/home/index Machine learning data sets IMAGENET semantically categorized image database WordNet. Semantically structured and linked word database See Information theory for more details. See also Communication theory Data transmission refers to the transfer of information from one entity to another, by means of a Data transmission system . See more here: Data transmission system Properties of data transmission systems Types of communication channel Types of transmitter/receivers These are mostly specified by the code they use. In data transmission systems, these are mostly Error-correcting codes. 
The main desired properties of a data transmission system, and thus the main subjects of study are: Thus the main problem of study is: for a particular communication channel, find code so that data transmission rate is as high as possible, while receiver receives the information with negligible probability of error. This is sometimes called "channel coding" because the most reliable code depends on the properties of the channel. This is done by finding codewords (sequences of input values) such that their images are as disjoint as possible. This is equivalent to sphere packing in high dimensions. The main result in data transmission theory is the Channel coding theorem, which gives a fundamental limit to the data transmission rate that can be achieved by a code, while keeping error rates negligible. This limit turns out to be the Channel capacity. The goals stated above for a data transmission system are achieved in two main ways: See http://pfister.ee.duke.edu/thesis/chap1.pdf, and other chapters. A data transmission system is the middle part of a Communication system, composed of: https://en.wikipedia.org/wiki/Data_type In computer science and computer Programming, a data type or simply type is a classification identifying one of various types of data, such as real, integer or Boolean, that determines the possible values for that type; the operations that can be done on values of that type; the meaning of the data; and the way values of that type can be stored. static vs dynamic typing: http://stackoverflow.com/questions/1517582/what-is-the-difference-between-statically-typed-and-dynamically-typed-languages (see third answer). How do compiled dynamically typed languages work? Do they store type data (unlike it says here?) https://en.wikipedia.org/wiki/Type_inference Integer Float also known as Price's model. The de Solla Price's model is a model used to explore the effect of preferential attachment in the formation of a network on the structure of the network. 
See Models of network formation for more information. Proposed in the study of citation networks. These have properties: The model defines the average number of papers cited by a new paper (i.e. the average out-degree) to be $c$ (and the distribution around $c$ to be sufficiently well-behaved; for instance, the variance should be finite). The main assumption of the model is that the probability of each new edge created when we add a new node only depends on the degree of that node (on the in-degree $q$ to be precise, i.e. the number of citations it has). In particular it assumes an affine preferential attachment, $P(i) \propto q_i + a$, where $q_i$ is the in-degree, and we have made use of the fact that for directed networks the mean in-degree equals the mean out-degree $c$. Finally, $a > 0$ is introduced so that nodes can get edges even if they don't have any in-degree yet (otherwise they would always stay like that, and the model wouldn't really be realistic). Note that a new paper can cite an existing paper more than once in this model, but the frequency at which these double-edges occur is low, and in the limit they are subdominant. Normalizing, $P(i) = \frac{q_i + a}{n(c+a)}$ is the probability that a new edge is connected to node $i$. On average $c$ edges are added (and the probability over the number of edges, whose average is $c$, is independent of the probability $P(i)$), therefore the expected number of edges added to node $i$ is $c\,\frac{q_i+a}{n(c+a)}$. Even though the probabilities for each node getting an edge are not independent, the expected number of edges added over a set of nodes is the sum of the $P(i)$ (see Probability theory Note 1). In particular, the expected number of edges added to all nodes with in-degree $q$, $n\,p_q(n)$ of them (where $p_q(n)$ is the degree distribution when there are $n$ nodes in the network (note that this changes, as we are adding nodes in the process of formation)) is: $n\,p_q(n) \times c\,\frac{q+a}{n(c+a)} = \frac{c(q+a)}{c+a}\,p_q(n)$. We can now write a master equation, which for $q \geq 1$ is: $(n+1)\,p_q(n+1) = n\,p_q(n) + \frac{c(q-1+a)}{c+a}\,p_{q-1}(n) - \frac{c(q+a)}{c+a}\,p_q(n)$, or in words: the number of nodes with in-degree $q$ after adding a node equals the number before, plus the expected number promoted from $q-1$ to $q$, minus the expected number promoted from $q$ to $q+1$. The equation for $q = 0$ is a bit different: $(n+1)\,p_0(n+1) = n\,p_0(n) + 1 - \frac{c\,a}{c+a}\,p_0(n)$, where there are no nodes with degree $-1$, and there is an extra $+1$ due to the node we just added. Now, taking the limit $n \to \infty$, and using the shorthand $p_q = p_q(\infty)$, the eq. becomes: $p_q = \frac{c(q-1+a)}{c+a}\,p_{q-1} - \frac{c(q+a)}{c+a}\,p_q$ for $q \geq 1$, and $p_0 = 1 - \frac{c\,a}{c+a}\,p_0$, where the terms proportional to $n$ have cancelled out. We can then solve these to get a recursion relation, $p_q = \frac{q+a-1}{q+a+1+a/c}\,p_{q-1}$, with initial condition $p_0 = \frac{1+a/c}{1+a+a/c}$ from the second equation. The solution can then be expressed in terms of Euler Beta functions, which in the asymptotic limit of large $q$ give a power-law decay with power $\alpha = 2 + a/c$. Thus, many scholars believe that this simple model may describe the fundamental mechanism by which power laws are obtained in many real-world networks. Computer simulation of de Solla Price's model See section 14.1.1 of Newman's book. Straightforward simulation of the model is slow. An alternative was proposed by Krapivsky and Redner, which follows this rule: With probability $c/(c+a)$ choose a vertex in strict proportion to in-degree. Otherwise choose a vertex uniformly at random from the set of all vertices.
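The Krapivsky-Redner rule can be sketched in a few lines of Python (function name and parameters are made up for illustration); the in-proportion-to-in-degree step is done by picking a uniformly random entry of the edge list and taking the node it points to:

```python
import random

def price_network(n, c, a, seed=0):
    """Grow a Price-model citation network: each new node cites up to c
    earlier nodes, chosen by the Krapivsky-Redner rule."""
    rng = random.Random(seed)
    cited_list = []        # one entry per edge: the node it points to
    in_degree = [0] * n
    for new in range(1, n):
        for _ in range(min(c, new)):   # early nodes can't cite c others yet
            if cited_list and rng.random() < c / (c + a):
                # uniform choice over edges = choice proportional to in-degree
                target = rng.choice(cited_list)
            else:
                target = rng.randrange(new)   # uniform over existing nodes
            cited_list.append(target)
            in_degree[target] += 1
    return in_degree

deg = price_network(10000, 3, 1.0)
# the tail of the in-degree distribution should decay roughly as q^-(2 + a/c)
```

Storing one list entry per edge makes the "in strict proportion to in-degree" choice O(1), which is what makes this much faster than the straightforward simulation.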
The trick for choosing a vertex in proportion to in-degree is to choose an edge (stored in a list) with uniform probability and then take the vertex it points to, so that the probability of choosing vertex $i$ is exactly proportional to how many edges point to it, i.e. its in-degree $q_i$. The mathematical study of strategies for optimal decision-making between options involving different risks or expectations of gain or loss depending on the outcome. https://www.wikiwand.com/en/Decision_theory See also Machine learning, and Reinforcement learning Digital art based on Deep learning Machine-learning extrapolation of art: http://extrapolated-art.com/ Random Pics Combined Using Neural Network Neural doodle Colorize B&W pictures: http://demos.algorithmia.com/colorize-photos/ A Neural Algorithm of Artistic Style Code: code Two Minute Papers - Deep Neural Network Learns Van Gogh's Art Neural-style applied to videos too. Inceptionism: Going Deeper into Neural Networks
– Going Deeper with Convolutions How ANYONE can create Deep Style images Style Transfer for Headshot Portraits Composing Music With Recurrent Neural Networks Deep Convolutional Inverse Graphics Network Deep learning Machine learning in a modular way using layers, like in Torch. Artificial neural networks, with many layers.. Two+ Minute Papers - How Does Deep Learning Work?
The computer that mastered Go Oxford course (with video) on lecture 12 The idea is also that layers are recursive, i.e. layers can be made up of layers. Dropout. usefulness of dropout Artificial neural networks with many layers. Multi-scale networks and an application. Scene Parsing with Multiscale Feature Learning, Purity Trees, and Optimal Covers http://www.clement.farabet.net/research.html#parsing good for generalizing models, transfer learning, multi-task learning. Good when you don't have much supervision data. Memory is good for recognizing time sequence data. See Long short-term memory. Integrating symbols into deep learning Why Deep Learning Works II: the Renormalization Group WHY DOES UNSUPERVISED DEEP LEARNING WORK? - A PERSPECTIVE FROM GROUP THEORY Books and resources Deep learning in neural networks: An overview https://deepmind.com/publications.html http://www.deeplearningbook.org/ Numenta, different approach to mainstream. See also, http://artificial-intuition.com/ http://monicasmind.com/ Find more here https://domainpunch.com/tlds/topm.php A gorgonocephalid basket star, a relative of the brittle star.
A hydromedusa jellyfish, spotted near “Enigma Seamount” at a depth of 3,700 meters.
Degree ceremony 2015-2016 more The degree, $k_i$, of a vertex, $i$, is the number of edges connected to the vertex. For an undirected graph with $n$ vertices, it is related to the adjacency matrix by: $k_i = \sum_j A_{ij}$. Also the total number of edges is: $m = \frac{1}{2}\sum_i k_i$, as each edge has two ends ('stubs'). The mean degree is then: $c = \frac{2m}{n}$. Aside: a node with a "high" degree is sometimes called a 'hub'. The number of edges in a complete (i.e. with max # of edges) simple graph can be found by counting the number of edges, where each edge represents a choice of a pair of vertices where the order doesn't matter. The number of such choices is $\binom{n}{2} = \frac{n(n-1)}{2}$. The density (or connectance), $\rho$, is the fraction of these that are actually present: $\rho = \frac{m}{\binom{n}{2}} = \frac{c}{n-1} \approx \frac{c}{n}$; the last approximation is for large $n$. A network is sparse if $\rho \to 0$ as $n \to \infty$. It is dense otherwise. These definitions make sense mathematically when one has a model for an ensemble of graphs that can be defined for any $n$. For an empirical network, one has two situations: For directed networks one has two types of degree: the in-degree, the number of ingoing edges (sum of a row in the adjacency matrix), and the out-degree, the number of outgoing edges (sum of a column in the adjacency matrix). Now the total number of edges is: $m = \sum_i k^{\text{in}}_i = \sum_i k^{\text{out}}_i$, as each edge has one ingoing end and one outgoing end. Clearly then the mean degrees are equal: $c_{\text{in}} = c_{\text{out}} = m/n$. In a weighted network, one defines the strength of a node as the weighted degree: $s_i = \sum_j W_{ij}$, where $W$ is the weight matrix. What is the shortest description of an object? The size of this description is the descriptional complexity. This general notion may also be called "structural complexity". See also Complexity theory, for other notions of complexity. Based on the minimum size of a program (interpreted by a Turing machine) that produces (describes) the object. YB videos: https://www.youtube.com/watch?v=HWsa_hZ7F3I Design is the rightful child of Art and Engineering Is anything worth maximizing?
On metrics, society, and selves https://medium.com/fwd-thoughts/the-future-is-without-apps-ddf43ec52aab#.a0q670oen Generative design See Optimization Design optimization is the process of finding the best design parameters that satisfy project requirements. Multidisciplinary design optimization Topology optimization ToPy Topology optimisation (or optimization, if you prefer) using Python. More Nice example of application http://web.solidthinking.com/additive_manufacturing_design?_ga=1.67375703.779924460.1456431417 https://en.wikipedia.org/wiki/Topology_optimization 3D printing design optimization Autodesk Within https://en.wikipedia.org/wiki/Generative_Design Small objects can swim by generating around them fields or gradients which in turn induce fluid motion past their surface by phoretic surface effects. We quantify for arbitrary swimmer shapes and surface patterns, how efficient swimming requires both surface ‘activity’ to generate the fields, and surface ‘phoretic mobility’ (the quantity that determines the direction of the velocity, relative to the driving gradient, which depends on specifics of the solute/surface interactions). We show in particular that Designing phoretic micro- and nano-swimmers (pdf) Janus particle Saturn particle Three-slice design Use slender body theory Is there a way for particles to actively "fight" their rotational diffusion and make them go straight for longer, without an external field? See Loop analysis Derived using Topological trace formula gives topological polynomial, which is just the characteristic polynomial of the transition matrix Examples from fsts. 
See notebook and here FST 10 http://www.wolframalpha.com/input/?i=1-3z%5E2%3D0 Can analyze forward or backward FST 21 See General relativity and book by Carroll Transformation of covariant and contravariant components http://www.msri.org/summer_schools/351 http://www.msri.org/programs/286 https://www.youtube.com/watch?v=R1oU5m69ILk&list=PLIljB45xT85DWUiFYYGqJVtfnkUFWkKtP https://www.youtube.com/watch?v=JCor1st0d2E&list=PLBY4G2o7DhF38OEvEImfR2heX7Szmq5Gs An Osmotic force caused by concentration gradients. Diffusio-Osmosis of Electrolyte Solutions in Microscale and Nanoscale Osmosis is a particular case, in which the diffusio-osmosis drives liquid across a semipermeable membrane. See Diffusiophoresis See Brownian motion for derivations. Also Fick's laws of diffusion: $\langle x^2 \rangle = 2dDt$, where $D$ is the diffusion coefficient, which when derived from a random walk is $D = \frac{\langle \delta x^2 \rangle}{2d\,\tau}$, where $d$ is the dimension of space. The $2$ comes from the fact that the walker can jump in either of two directions, per dimension. $\langle \delta x^2 \rangle$ and $\tau$ represent the expected distance squared per step, and the time step in the random walk, respectively. See for example these notes, for derivation. See also a simple kinetic derivation of the diffusion coefficient (in the context of solid state diffusion), see page 7
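The random-walk origin of the diffusion coefficient can be checked numerically; a minimal sketch (function name made up) for a 1D walk with unit step length and unit time step, for which the relations above give $D = 1/2$ and hence $\langle x^2 \rangle = 2Dt = t$:

```python
import random

def msd_1d(n_walkers, n_steps, seed=0):
    """Mean-square displacement of independent 1D random walkers
    taking unit steps left or right at each time step."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        x = sum(rng.choice((-1, 1)) for _ in range(n_steps))
        total += x * x
    return total / n_walkers

# With delta = tau = 1 and d = 1, D = 1/2, so <x^2> should grow ~ n_steps.
print(msd_1d(5000, 100))   # close to 100
```

Averaging over many walkers is needed because a single trajectory's squared displacement fluctuates strongly around $2Dt$.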
Also see Alex's notes on kinetic theory. Solutions to diffusion equation, using Fourier transform, and using Green functions. Can also derive from the Fokker-Planck equation. Solutions to diffusion equation for free, absorbing, and reflecting boundary conditions. https://en.wikipedia.org/wiki/Diffusion Diffusion limit on rate of reaction between molecules. Begin with a spherical particle, and assume a stationary solution. Set the concentration to be fixed to $c_\infty$ far from the particle, and to $0$ on its surface, as molecules reaching it are assumed to be captured. To do the general case where both particles are moving, one should use relative and center of mass coordinates (#trythis). The answer is: $k = 4\pi (D_A + D_B)(a_A + a_B)\,c_\infty$, where $A$ and $B$ are the particle species, and $D_{A,B}$ and $a_{A,B}$ are the diffusion constants and radii. Diffusion-Limited Aggregation, a Kinetic Critical Phenomenon DLA - Diffusion Limited Aggregation Good notes about surface growth in general. Similar models: Eden growth model, random animals. Diffusiophoresis is the process by which particles move through a chemical concentration gradient, due to an attractive or repulsive interaction between the particle and the chemical compound. It is a kind of phoretic mechanism of colloids. Essentially, the surface of the particle, due to Intermolecular forces (or other entropic forces), can be attracted to, or repelled by, the solvent molecules. If there is a gradient in the concentration of these, they can exert a net force on the particle causing it to gain a certain velocity (and the particle will exert a force on the fluid, causing it to slip over its surface). In Self-diffusiophoresis (a kind of self-propulsion), the particle itself produces the compound it interacts with. http://pubs.acs.org/doi/abs/10.1021/la00050a035 https://en.wikipedia.org/wiki/Diffusiophoresis http://link.springer.com/referenceworkentry/10.1007/978-3-642-27758-0_328-5#page-1 Theory When the thickness of the interfacial layer is thin compared to the object, the resulting flow is most conveniently described by an effective slip velocity of the liquid past the solid at position $r_s$ on the surface, proportional to the local gradient of the concentration. See a simple derivation below (from Colloid Transport by Interfacial Forces). (I think they are missing a d on the bottom in (9)) See derivation in the black notebook. Note that at the end you need to use the trick of changing order of integration, as done in Derjaguin's original paper (on page 7). The general tensorial equation is $v_s = \mu(r_s)\,\nabla_\parallel c$, where $\mu(r_s)$ is the local surface phoretic mobility, which depends on the particular interaction between the particle and the solvent molecules (through the integral in (9)). Then using the reciprocal theorem (of Low Reynolds number flows), one can find the velocity of the particle, knowing the slip velocity of the solvent around it. In a given basis, the drift velocity of the colloid is then given by a surface integral of the slip velocity weighted by the hydrodynamic stress tensor at the surface of an object of the same shape dragged by an applied unit force, in the absence of slip. In the case of spherical particles, the drift velocity turns out to be simply the surface average $V = -\langle v_s \rangle$. For uniform motility, $\mu(r_s) = \mu$, and so $V = -\mu \langle \nabla_\parallel c \rangle$. This simple equation for the velocity is appropriate for the Active colloids used in studying the Self-assembly of active colloids. In the derivation of equation (9) they assumed that $\mu$ doesn't depend on position. Is this the reason why a particle without a gradient in $c$, even if it has a gradient in its phoretic mobility, is predicted to have zero drift velocity? Would you get a non-zero velocity if you took the dependence of $\mu$ on position into account? See Diffusiophoresis caused by gradients of strongly adsorbing solutes Diffusiophoresis: Migration of Colloidal Particles in Gradients of Solute Concentration Kinetic Phenomena in the boundary layers of liquids 1. the capillary osmosis Online photoshop-like image editor http://www.photopea.com/ Glitch art: http://www.glitchet.com/resources These
are materials in which a very small concentration (at most a few percent) of a magnetic element, often iron or manganese, is substituted at random locations inside a nonmagnetic metallic host, such as one of the noble metals (copper, silver, or gold). At low densities of the magnetic atoms, their resistance, which in normal metals decreases and eventually flattens as the temperature is lowered, starts to rise again at a few degrees above absolute zero. This came to be known as the Kondo effect. At higher concentrations (already at about 1%), the impurities in dilute magnetic alloys begin interacting, and they were among the first examples of Spin glasses. Percolation on a directed Network. Most of the time, it refers to percolation on a directed lattice where the direction of the edges is constrained to certain directions in the space. This is applied for instance to water percolating down a porous medium, where in this case gravity imposes a preferred direction. It is in a different universality class than undirected bond percolation. It can be described by a stochastic process similar to models describing Epidemics on networks. https://en.wikipedia.org/wiki/Directed_percolation http://guava.physics.uiuc.edu/~nigel/courses/563/Essays_2013/PDF/wolin.pdf Percolation in directed scale-free networks New universality for spatially disordered cellular automata and directed percolation Exhaustive percolation on random networks See also Dynamical systems on networks. It also has relations to sandpile models in the study of Self-organized criticality Directed percolation and Sandpile models - 1 by Deepak Dhar New universality for spatially disordered cellular automata and directed percolation Directed percolation and Reggeon field theory Equivalence of Cellular Automata to Ising Models and Directed Percolation See Cellular automata Are Damage Spreading Transitions Generically in the Universality Class of Directed Percolation? Understand the different kinds of universality classes better (see Critical phenomena). Discrete dynamical systems, a.k.a. maps See Nonlinear map See Automata theory, Cellular automata, Boolean network, Dynamical systems on networks.. Great software to explore discrete dynamics: Discrete Dynamics Lab Tools for researching Cellular Automata, Random Boolean Networks, multi-value Discrete Dynamical Networks, and beyond An Introduction to Chaotic Dynamical Systems, second edition. Chaos theory Workshop on Combinatorics, Number Theory and Dynamical Systems - Artur Avila
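As a concrete instance of the discrete dynamical systems (maps) mentioned above, a minimal sketch of iterating the logistic map (function name made up); at $r = 2.5$ the orbit converges to the stable fixed point $x^* = 1 - 1/r = 0.6$:

```python
def logistic_orbit(r, x0, n, discard=100):
    """Iterate x -> r*x*(1-x), drop a transient, return the next n values."""
    x = x0
    for _ in range(discard):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(n):
        x = r * x * (1 - x)
        orbit.append(x)
    return orbit

print(logistic_orbit(2.5, 0.2, 3))   # all values near the fixed point 0.6
# r = 3.2 gives a period-2 cycle; r = 4.0 gives chaotic orbits
```

Discarding a transient is the standard trick for looking at the attractor (fixed point, cycle, or chaotic set) rather than the approach to it.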
Abstract algebra offers the foundation of discrete mathematics. Mathematics - Discrete Mathematics See Mathematical logic, Set theory, etc. A discrete memoryless source is an Information source which is: See Markov chain Communicating classes. A set of states that can communicate with each other (which constitutes an Equivalence relation). A transition matrix where the whole state space is a communicating class is called irreducible. Stopping time, strong Markov property. Recurrent vs transient states: let $C$ be a communicating class. Then either all states in $C$ are transient or all are recurrent. See MMathPhys oral presentation Different definition of finite-state complexity here: http://web.mit.edu/cocosci/Papers/complex.pdf (though still not the one we need below) Using finite-state complexity we can define the complexity of a string produced by a finite transducer. It is effectively the length of the smallest program that describes both a transducer (according to some encoding) and the string itself. This definition is not universal, as it is for Turing machines, because of the non-universality of finite transducers. This is not what I want, because they don't fix the transducer, while a given GPM would fix the transducer. Assuming we can use the same idea as above, if the length of the input is much larger than the length of the transducer, we are effectively inputting random fixed-length strings to a Turing machine (that we know halts hm..), and by Levin's coding theorem (applied here to the non-asymptotic case..) we expect "strings with many long descriptions to have a short description too". Furthermore, if we assume that the map is many to one, then each string would have many long descriptions, so each will have a short description. But if there are many such strings, not all of them can have short descriptions. Thus, the only consistent situation is for a few strings with simple descriptions having many long descriptions too, and many strings with few long descriptions.
This assumed that the finite transducer is simple (defined by the condition above that the input string to the finite state transducer (FST) be much larger than the FST description). If it isn't, the bias argument above still holds I think, but because the transducer is complex, its outputs will all be complex, with a complexity dominated by the transducer's. It seems like Levin's coding theorem holds for inputs of all lengths, which means it works for the argument above! However, I don't understand it fully, in particular its derivation, so I'm not too confident about this. See this book SEE EMAIL CONVERSATION FOR FOLLOW UP ON THIS. Reasoning above doesn't hold. Kamal's answer: I read the three papers - thanks for those. Shallit and Wang (2001) was not super interesting, though obviously relevant in the sense that they focus on computable complexities. Calude (2011) is more interesting. The most interesting result is that a kind of Invariance Theorem holds for finite state transducers (which are the weakest type of computation, UTMs being the most powerful because they can compute any algorithm). The Invariance theorem in AIT comes from the fact that any UTM can simulate any other UTM, while their Inv Thm for finite state machines does not invoke this property. Assuming prefix-free descriptions of the transducers, this implies a kind of coding theorem for finite state transducers. This is nice because finite state transducers do not have the mystical air that UTMs do (uncomputable). I think it is worth citing this Calude article as a comment, but maybe not making too much of a deal about it. I also looked at Guillermo's link – just a comment on some reasoning in there (I know it is just notes): Furthermore, if we assume that the map is many-to-one, then each string would have many long descriptions, so each will have a short [shorter] description. 
But if there are many such strings, not all of them can have short descriptions [true, but they can all have shorter descriptions]. Thus, the only consistent situation is for a few strings with simple descriptions having many long descriptions too, and many strings with few long descriptions [hence bias in the map]. The reasoning here is a little rushed – if the map is many-to-one, then all output strings have shorter descriptions. But this does not explain why some outputs have short descriptions and some long (which leads to bias). The central thing to explain in bias is why some outputs will have shorter descriptions than others. The statement "strings with many long descriptions have a short description too" does not say anything about how long these short descriptions are, whereas the argument presented assumes that these are short enough to be a problem, in the sense of "not all of them can have short descriptions". As a trivial but illustrative example, consider the many-to-one map from binary strings of length 10 to binary strings of length 5. We can easily construct a uniform distribution for this system. According to the argument above, this system should show bias… but it does not (even though the map is simple). See Condensed matter physics, Complex systems A disordered system, I think, is defined as one which has quenched disorder that affects its behaviour. Quenched disorder affects the behaviour if the system is finite, or if the disorder is non-Self-averaging http://www.sapienza.isc.cnr.it/disordered-systems.html Free Probability, Random Matrices and Disorder – CME 510 Fluctuation-dominated Phase Ordering: Order Parameter Scaling By Mustansir Barma A dispersion is a material comprising more than one phase where at least one of the phases consists of finely divided phase domains, often in the colloidal size range, dispersed throughout a continuous phase. A continuous phase is a phase not interrupted in space. 
A dispersed phase is a phase constituted of particles of any size and of any nature dispersed in a continuous phase of a different composition. The dispersion medium is the matrix for the dispersed phase. The dispersion medium is the continuous phase of the dispersion. Source from IUPAC: Terminology of polymers and polymerization processes in dispersed systems (IUPAC Recommendations 2011). Depending on the size of the particles in the dispersed phase we have: "Dispersion", without adjective, is often used to refer to the colloidal regime. For phases with particles of colloidal size or larger. For smaller sizes, see solution. See https://en.wikipedia.org/wiki/Dispersion_(chemistry) for examples.
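Returning to the 10-bit to 5-bit counterexample a few notes above: it can be checked numerically that a simple many-to-one map need not be biased (a minimal sketch; truncation to the first 5 bits is my arbitrary choice of a uniform many-to-one map):

```python
from collections import Counter
from itertools import product

# Map each 10-bit string to its first 5 bits: a simple many-to-one map
# (each output has exactly 2^5 = 32 preimages).
def truncate(bits):
    return bits[:5]

# Enumerate all 2^10 inputs and count how often each 5-bit output occurs.
counts = Counter(truncate(''.join(b)) for b in product('01', repeat=10))

# The induced output distribution is exactly uniform: no bias.
print(len(counts), set(counts.values()))  # 32 {32}
```

Every output is hit by exactly 32 inputs, so random sampling of inputs induces a perfectly uniform output distribution, in line with the objection that many-to-one-ness alone does not produce bias.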
A quantity of interest in Percolation theory is the distribution of sizes of the small clusters in percolation models. This can be quantified by the total number of clusters of size s, N_s. Sometimes one works with the density n_s = N_s/N instead, to eliminate the scaling with the system size N that would make N_s → ∞ as N → ∞. One can also work with the probability π_s that a random node belongs to a cluster of size s, which can be easily seen to be π_s = s N_s/N = s n_s. This is clearly the probability of picking a node inside a cluster of size s given a particular network configuration. In the case of Percolation on random graphs and networks, it's also the probability that a random network configuration (following the appropriate probability distribution defining the network ensemble) makes a particular chosen node be in a cluster of size s. This is because the two operations are statistically independent. π_s can be shown to decrease exponentially with s in the subcritical regime, and it decays more slowly in the supercritical regime (see here). At the critical point, the cluster size follows a power-law distribution (as do, for instance, avalanche sizes in the sandpile model at criticality). Deoxyribonucleic acid See DNA nanotechnology, MMathPhys oral presentation https://en.wikipedia.org/wiki/DNA The Shape of DNA - Numberphile. DNA is a right-handed Helix. Both strands are right-handed (almost always in biology); they have to have the same handedness, as can be seen by looking at a cross-section and seeing the cross-sections of the strands: if they were of opposite handedness, they would collide (given the helices have the same radii). The two strands of DNA also have a direction associated with them (the backbone determines it), and the strands are antiparallel, as seen in the animation below See also Chirality in biology Can model DNA as a ribbon, and can define its torsion. Boundaries of the ribbon are the backbones, and they form the same surface that you get by twisting a normal ribbon. Can also coarse-grain more, and model it as a curve. 
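The relation π_s = s N_s / N from the percolation note above can be verified on a toy cluster configuration (a sketch; the cluster sizes are arbitrary):

```python
from collections import Counter

# A toy cluster configuration; sizes are arbitrary.
cluster_sizes = [1, 1, 2, 3, 3, 5]       # one cluster per entry
N = sum(cluster_sizes)                    # total number of nodes (15)
N_s = Counter(cluster_sizes)              # N_s: number of clusters of size s

# Probability that a uniformly random node lies in a cluster of size s,
# by direct enumeration of nodes.
nodes = [s for s in cluster_sizes for _ in range(s)]
pi_direct = {s: nodes.count(s) / N for s in N_s}

# The formula pi_s = s * N_s / N gives the same distribution.
pi_formula = {s: s * N_s[s] / N for s in N_s}
print(pi_direct == pi_formula)  # True
```

The two computations agree because a cluster of size s contains s of the N nodes, and there are N_s such clusters.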
Packing of DNA in a cell. See here How DNA unties its own knots - Numberphile, using Type II topoisomerase (see more here). Drugs that target type II topoisomerase are used as antibiotics, because this enzyme is necessary for the cell to replicate correctly in bacteria. This is because DNA is a helix and forms a loop in bacteria, which means that when DNA is unzipped by helicase, the two single-strand loops are interlinked. Topoisomerase then cuts and stitches DNA in such a way as to unlink them. See DNA replication https://www.technologyreview.com/s/419590/quantum-entanglement-holds-dna-together-say-physicists/ Rapid chiral assembly of rigid DNA building blocks for molecular nanofabrication Practical components for three-dimensional molecular nanofabrication must be simple to produce, stereopure, rigid, and adaptable. We report a family of DNA tetrahedra, less than 10 nanometers on a side, that can self-assemble in seconds with near-quantitative yield of one diastereomer. They can be connected by programmable DNA linkers. Their triangulated architecture confers structural stability; by compressing a DNA tetrahedron with an atomic force microscope, we have measured the axial compressibility of DNA and observed the buckling of the double helix under high loads. Molecular Machinery from DNA: Synthetic Biology from the Bottom up Programmable DNA Nanosystem for Molecular Interrogation an embedded Förster Resonance Energy Transfer (FRET) system, in which one cyanine 3 (cy3) molecule is positioned on the frame and one cyanine 5 (cy5) molecule is on the ring, reports the relative position of the ring under various conditions Hybrid, multiplexed, functional DNA nanotechnology for bioanalysis Reversible Reconfiguration of DNA Origami Nanochambers Monitored by Single-Molecule FRET Universal computing by DNA origami robots in a living animal (see also DNA computing). 
Controlled Release of Encapsulated Cargo from a DNA Icosahedron using a Chemical Trigger DNA Scissors Device Used to Measure MutS Binding to DNA Mis-pairs Nanomechanical DNA origami 'single-molecule beacons' directly imaged by atomic force microscopy A DNA-fuelled molecular machine made of DNA Construction of a 4 Zeptoliters Switchable 3D DNA Box Origami Molecular Engineering of DNA: Molecular Beacons See also Atomically precise manufacturing I think this may be the article Turberfield mentioned: http://www.nature.com/nature/journal/v525/n7567/full/nature14860.html also this: http://www.nature.com/nnano/journal/v10/n9/full/nnano.2015.204.html This talks about 3D scaffolded DNA origami: http://www.nature.com/nmeth/journal/v8/n3/full/nmeth.1570.html Structural DNA Nanotechnology: State of the Art and Future Perspective Challenges and opportunities for structural DNA nanotechnology DNA nanotechnology from the test tube to the cell DNA origami William Shih (Harvard) Part 1: Nanofabrication via DNA Origami DNA Origami with Complex Curvatures in Three-Dimensional Space DNA bricks/tiles Complex shapes self-assembled from single-stranded DNA tiles DNA brick crystals with prescribed depths Polyhedra Self-Assembled from DNA Tripods and Characterized with 3D DNA-PAINT Three-Dimensional Structures Self-Assembled from DNA Bricks Other DNA self-assembly techniques and reviews Rational design of self-assembly pathways for complex multicomponent structures Folding DNA to create nanoscale shapes and patterns (2006, Rothemund). 
Complex DNA Nanostructures from Oligonucleotide Ensembles Placement and orientation of individual DNA shapes on lithographically patterned surfaces Self-assembly of DNA into nanoscale three-dimensional shapes DNA CAD Computer-Aided Design of DNA Origami Structures Computer-assisted design for scaling up systems based on DNA reaction networks DNA nanostructures: a shift from assembly to applications Single-molecule analysis Single-Molecule Mechanochemical Sensing Using DNA Origami Nanostructures Replication of DNA is a step in Mitosis Animation below is missing steps 4 and 5: The effects of small damping, nonlinearity and forcing on a harmonic oscillator: There are potentially qualitatively different forms of the equation, depending on which combination of the considered parameters is non-zero. The Duffing Equation: Nonlinear Oscillators and their Behaviour More papers and references: https://en.wikipedia.org/wiki/Intermittency https://en.wikipedia.org/wiki/Crisis_%28dynamical_systems%29 Y. Ueda, Steady Motions Exhibited by Duffing's Equation: A Picture Book of Regular And Chaotic Motions [[Catastrophes with Indeterminate Outcome (Stewart, H. B.; Ueda, Y.)|http://ezproxy-prd.bodleian.ox.ac.uk:2084/stable/51909?seq=1#page_scan_tab_contents]] EXPLOSION OF STRANGE ATTRACTORS EXHIBITED BY DUFFING'S EQUATION - Yoshisuke Ueda Common dynamical features on periodically driven strictly dissipative oscillators (introduces torsion and winding numbers) Comparison of bifurcation sets of driven strictly dissipative oscillators Wada basins https://en.wikipedia.org/wiki/Lakes_of_Wada Wada basin boundaries and basin cells Other link Unpredictable behavior in the Duffing oscillator: Wada basins [[Experimental investigation of the response of a harmonically excited hard Duffing oscillator|http://www.ias.ac.in/article/fulltext/pram/068/01/0099-0104]] From here Analytical methods Exact analytical solutions for forced cubic restoring force oscillator Uses Jacobi elliptic function (only for the undamped Ueda oscillator, I think). A comparison of classical and high dimensional harmonic balance approaches for a Duffing oscillator Second order averaging and bifurcations to subharmonics in Duffing's equation Subharmonic Oscillations in Nonlinear Systems Chaotic states and routes to chaos in the forced pendulum Organization of periodic orbits in the driven Duffing oscillator Structure in the bifurcation diagram of the Duffing oscillator Superstructure in the bifurcation set of the Duffing equation General case of crisis-induced intermittency in the Duffing equation for the double-well Duffing oscillator. On the jump-up and jump-down frequencies of the Duffing oscillator More books: Chaos in Nonlinear Oscillators: Controlling and Synchronization
By M Lakshmanan, K Murali Antimonotonicity reversal of period-doubling cascades Spatial networks Driven matter refers to a type of bulk matter, often soft condensed matter, to which energy is being applied in a way that significantly affects some of its degrees of freedom. It is thus a driven system, in the sense of Control theory and control systems. It is closely related to Active matter. Alcohol Duffing oscillator is a nonlinear oscillator. Physical meaning The oscillator, written as x'' + δx' + αx + βx³ = γ cos(ωt), corresponds to a nonlinear spring with either hardening for β > 0 or softening for β < 0 (for amplitude not too large, as then its motion becomes unbounded). For δ = γ = 0 the system can be integrated to obtain an energy E = x'²/2 + αx²/2 + βx⁴/4, and the system is then a Hamiltonian system. When δ > 0 and γ = 0, E satisfies dE/dt = −δx'² ≤ 0. One can easily show that E is indeed a Lyapunov function (for α, β > 0) and the origin is globally asymptotically stable More interesting: Nonlinear resonances. Shows chaotic behaviour, intermittency, jump phenomena, etc. See Lakes of Wada.. Treat with multiple scales method Primary resonance Secondary resonances Subharmonic Superharmonic Period-doubling cascade Reverse period doubling and reverse cascade (bubbles) Intermittency Lakes of Wada Other? https://en.wikipedia.org/wiki/Dynamic_programming Dynamical Instability in Boolean Networks as a Percolation Problem
pdf Phase Transitions in Complex Network Dynamics A connection between the percolation transition and the onset of chaos in the Kauffman model Percolation and spreading of damage in a simplified Kauffman model Activities and Sensitivities in Boolean Network Models Core Percolation and Onset of Complexity in Boolean Networks Annealed approximation: Random Networks of Automata: A Simple Annealed Approximation Boolean functions in Boolean networks are represented by a truth table, which in turn can be represented by a 2^k-length vector/string of 0s and 1s, for a k-input truth table. 2^k is the number of possible inputs, i.e. the cardinality of the set {0,1}^k. The bit string can be interpreted as a binary decision tree. The average sensitivity (when averaged over all the functions in the network) appears to be a good parameter for predicting whether the dynamics of the Boolean network are ordered or chaotic Activities and Sensitivities in Boolean Network Models Some interesting analogies, investigated via computer simulations, between percolation and properties of Kauffman Boolean networks in a 2D lattice Random Boolean networks: Analogy with percolation Connection between sensitivity and complexity of GP map of Boolean networks.. MMathPhys oral presentation Relation between Kolmogorov complexity and sensitivity of a Boolean function. Sensitivity <> constrained/unconstrained, coding/non-coding, etc. More references:
A geometrical interpretation of the chaotic state of inhomogeneous deterministic cellular automata
The role of certain Post classes in Boolean network models of genetic networks
Boolean Dynamics with Random Couplings
Isomorphism of Quasispecies and Percolation Models
Spectral theory for the robustness and dynamical properties of complex networks
Phase Transitions in Two-Dimensional Kauffman Cellular Automata
Phase transition in cellular random Boolean nets
The Physics of Structure Formation: Theory and Simulation
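The average sensitivity mentioned in the Boolean-network notes above can be computed by brute force from a truth table (a sketch; the k-input-function representation follows the note, the example functions are my own choices):

```python
from itertools import product

def average_sensitivity(f, k):
    """Average, over all 2^k inputs, of the number of single-bit flips
    that change the output of the k-input Boolean function f."""
    total = 0
    for x in product((0, 1), repeat=k):
        for i in range(k):
            y = list(x)
            y[i] ^= 1                      # flip input bit i
            total += f(x) != f(tuple(y))
    return total / 2 ** k

k = 3
xor = lambda x: sum(x) % 2                 # parity: every flip changes the output
const = lambda x: 0                        # constant function: no flip matters
print(average_sensitivity(xor, k), average_sensitivity(const, k))  # 3.0 0.0
```

Parity attains the maximum sensitivity k (chaotic end) and a constant function attains 0 (ordered end), bracketing the range of the order parameter discussed in the note.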
How things move A space (in the mathematical sense; for a continuous space, one often uses a Manifold, or a Topological space), with a Function (a.k.a. a map) that describes how a point in the space evolves (in "time"). Measure-theoretical dynamical system Continuous dynamical systems are dynamical systems where the space is continuous. They are often represented as systems of 1st-order O.D.E.s. Linear dynamical systems (linear O.D.E.s) are easy to analyze: they can be analyzed by looking at the eigenvalues of the Jacobian. Discrete dynamical systems are those where the space is discrete. They are often represented as systems of difference equations (see Nonlinear maps). Measure-theoretical dynamical system The richest class of dynamical systems are Nonlinear systems A dynamical system, whether continuous or discrete, can be partitioned (coarse-grained), so that its dynamics can be studied as Symbolic dynamics. If the system is a Probabilistic dynamical system, then the coarse-graining gives rise to a stochastic process Dynamical systems generally describe deterministic processes. Probabilistic processes are described as Stochastic processes. However, these can sometimes be described as deterministic dynamics of probability distributions, or as a probability measure over a deterministic process (i.e. a Probabilistic dynamical system). 
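The coarse-graining idea above can be made concrete: partitioning the state space and recording which cell the orbit visits yields a symbol sequence (a sketch using the logistic map x → r x (1 − x) with a binary partition at x = 1/2; the parameter and initial condition are arbitrary choices):

```python
def logistic_orbit_symbols(x0, r=4.0, n=20):
    """Iterate the logistic map x -> r*x*(1-x) and coarse-grain each state
    to a symbol: 'L' if x < 1/2, 'R' otherwise (binary partition of [0,1])."""
    x, symbols = x0, []
    for _ in range(n):
        symbols.append('L' if x < 0.5 else 'R')
        x = r * x * (1.0 - x)
    return ''.join(symbols)

seq = logistic_orbit_symbols(0.2)
print(seq)  # the orbit's 20-symbol itinerary over the partition {L, R}
```

The deterministic orbit becomes a string over a finite alphabet, which is exactly the object studied in symbolic dynamics.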
See Wiki page for good intro and different kinds Dynamical systems on complex space (particularly discrete ones): Complex dynamics Nonlinear Dynamics 1: Geometry of Chaos by Predrag Cvitanović (ChaosBook course) https://en.wikipedia.org/wiki/Floquet_theory Discrete Dynamical Networks and their Attractor Basins See Discrete dynamical systems See Mason and Gleeson tutorial article See also Temporal networks See Boolean network See Dynamical Instability in Boolean Networks as a Percolation Problem Dynamics of Boolean Networks: An Exact Solution Influence and Dynamic Behavior in Random Boolean Networks Dynamics of Complex Systems: Scaling Laws for the Period of Boolean Networks. Relation between the (expected) period of an RBN and the number of nodes N. Using some numerical and analytical results, they find a power-law relation. What Darwin didn't know: natural variation is structured GP map bias in Boolean networks (see MMathPhys oral presentation) Guiding the self-organization of random Boolean networks (RBN). Quote from article: It is useless to enter an ontological discussion on self-organization. Rather, the question is: when is it useful to describe a system as self-organizing? [...] A model cannot be judged independently of the context where it is used. I've always agreed with this philosophy. Things like self-organizing or complex are perspectives on systems, not hard classification schemes. Can explore RBNs with RBNLab Since RBNs are finite (they have 2^N possible states) and deterministic, eventually a state will be revisited. Then, the network will have reached an attractor. The number of states in an attractor determines the period of the attractor. Point attractors have period one (a single state), while cyclic attractors have periods greater than one (multiple states, e.g., four in Fig. 2) An RBN can have one or more attractors. The set of states visited until an attractor is reached is called a transient. 
The set of states leading to an attractor forms its basin. The basins of different attractors divide the state space. RBNs are dissipative, i.e., many states can flow into a single state (one state can have several predecessors), but from one state the transition is deterministic toward a single state (one state can have only one successor). The number of predecessors is also called in-degree. States without a predecessor are called “Garden of Eden” (GoE) states (in-degree = 0), since they can only be reached from an initial condition. Figure 3 illustrates the concepts presented above. Fig. 3
Example of state transitions. B is a successor state of A and a predecessor of C. States can have many predecessors (e.g., B), but only one successor. G is a Garden of Eden state since it has no predecessors. The attractor C→D→E→F→C has period four One of the main topics of RBN research is to understand how changes in the topological network (lower scale) affect the state network (dynamics of higher scale), which is far from obvious. RBNs are generalizations of Boolean Cellular automata (von Neumann 1966; Wolfram 1986, 2002), where the states of cells are determined by K neighbors, i.e., not chosen randomly, and all cells are updated using the same Boolean function ~ ~ ~ The self-organization of RBNs can also be interpreted in terms of complexity reduction. For example, the human genome has approximately 25,000 genes. Thus, in principle, each cell could be in one of the 2^25,000 possible states of that network. This is much more than the estimated number of elementary particles arising from the Big Bang. However, there are only about 300 cell types (attractors (Kauffman 1993; Huang and Ingber 2000)), i.e., cells self-organize toward a very limited fraction of all possible states. There are several regimes. In the critical regime near in-degree 2 (in the topological network): Few nodes have many predecessors, while many nodes have few predecessors. Actually, the in-degree distribution (in the state network, I think) approximates a power law (Wuensche 1998). non-equilibrium dynamical properties of Spin glasses. As we’ve already seen (and discuss more fully in
section 4.8), a spin glass in the absence of a magnetic field has zero
magnetization. But it shouldn’t be surprising that when placed
inside a uniform magnetic field, the atomic magnetic moments
will try to orient themselves along the field—as occurs in any
magnetic system—resulting in a net magnetization. So far not
very exciting; but what then happens after the field is removed or
altered? There are any number of ways in which this can be done, and
in the spin glass they all lead to somewhat different outcomes. One approach is to cool the spin glass in a uniform magnetic
field H from a temperature above T f to one well below, and
then remove the field. On doing so, the spin glass at first retains
a residual internal magnetization, called the thermoremanent
magnetization. The thermoremanent magnetization decays with
time, but so slowly that it remains substantial on experimental
timescales. Another procedure is to cool the spin glass below T f in zero
field, turn on a field after the cooling stops, and after some
time remove the field. This gives rise to the isothermal remanent
magnetization. In the simplest of these experiments, a spin glass is cooled to a temperature
below T f in an external magnetic field, often through a deep
thermal quench. The spin glass then sits at that fixed field
and temperature for a certain “waiting time” t_w. After the
waiting time has elapsed, the field is switched off and the decay
of the thermoremanent magnetization is measured at constant
temperature. Interestingly, the spin glass “remembers” the
waiting time: a change in the rate of decay occurs at a time
roughly t_w after the field is removed. Aging is not confined to
spin glasses, but their unusual behaviors make them somewhat
special. all share the features of a wide
range of relaxational processes, leading to a broad distribution
of intrinsic relaxation times; a significant amount of metastability,
meaning that most relaxations, whether involving a small or large
number of spins, can only occur after the system surmounts
some energy or free energy barrier; and a consequently complicated “energy landscape,” the meaning of which is discussed in
section 4.9. Ecology (from Greek: οἶκος, "house", or "environment"; -λογία, "study of" [A]) is the scientific analysis and study of interactions among organisms and their environment. Related to environmental studies. "It's mainly because people haven't been cutting down nearly as much wood for fuel, plus there have been concerted efforts to manage and regrow forests. Also some of the areas have been regrowing after the World Wars made them less suitable for farmland." ~Laurie Economics is the social science that describes the factors that determine the production, distribution and consumption of goods and services. It also includes the methods used for the purposeful Engineering of such processes, in a complex Society. An https://en.wikipedia.org/wiki/Economy (Greek οίκος – "household" and νέμoμαι – "manage") is an area of the production, distribution, or trade, and consumption of goods and services by different agents in a given geographical location. Economic and product cycle The economic cycle involves the product cycle, plus steps that control the product cycle, which involve systems like markets. Industry, the production of goods, often by processing raw materials (Manufacturing) Demand, use, Culture, trends, necessity, Psychology Disposal, Recycling Quaternary: Data & Knowledge, information services Quinary sector: human services Economic development is correlated with an increase in the complexity of the economic activity. See also Resource management. Tax havens https://panamapapers.icij.org/the_power_players/ https://en.wikipedia.org/wiki/Grundrisse http://motherboard.vice.com/read/the-future-of-robot-labour-has-everything-to-do-with-capitalism Bet-hedging: What is bet-hedging, really? 
Bet-hedging as an evolutionary game: the trade-off between egg size and number Bet-hedging theory addresses how individuals should optimize fitness in varying and unpredictable environments by sacrificing mean fitness to decrease variation in fitness Evolution of phenotypic robustness Genome Growth and the Evolution of the Genotype-Phenotype Map Fundamental Properties of the Evolution of Mutational Robustness Evolvability and robustness in artificial evolving systems: three perturbations SELF-ASSEMBLY, MODULARITY, AND PHYSICAL COMPLEXITY pdf presentation See MMathPhys oral presentation – The structure of the genotype–phenotype map strongly constrains the evolution of non-coding RNA. See notes Probabilistic bias in genotype-phenotype maps. See more here: http://dingleresearch.weebly.com/publications.html Self-assembling polyominoes model: A tractable genotype–phenotype map modelling the self-assembly of protein quaternary structure More.... Modeling the evolution of molecular systems from a mechanistic perspective Adaptive dynamics under development-based genotype–phenotype maps Why self-incompatibility in the Brassicaceae is totally cool 3. The organization of biological sequences into constrained and unconstrained parts determines fundamental properties of genotype–phenotype maps. Features observed in several GP maps (including the simple Fibonacci GP map they use as a model): random null model: one that maintains the number of genotypes mapping to each phenotype, but assigns genotypes randomly Genetic correlations: neutral correlations can be quantified by the robustness to mutations, which can be many orders of magnitude larger than that of the null model, and, crucially, above the critical threshold for the formation of large neutral networks of mutationally connected genotypes, which enhance the capacity for the exploration of phenotypic novelty. Thus neutral correlations increase evolvability. 
non-neutral correlations: Compared to the null model: Non-neutral correlations of type i) and ii) reduce the rate at which new phenotypes can be found by neutral exploration, and so may diminish evolvability, while non-neutral correlations of type iii) may instead facilitate evolutionary exploration and so increase evolvability. suggesting that some of the results discussed in this paper for
RNA may hold more widely in biology See also Evolving automata Paper with several examples of GP maps, including cellular automata map: An investigation of redundant genotype-phenotype mappings and their role in evolutionary search See Measures and metrics for networks The eigenvector centrality (first defined by Bonacich in 1987) is defined by Ax = λ_1 x, where x is the vector of centralities and λ_1 is the largest eigenvalue of the adjacency matrix A. The reason we choose the largest eigenvalue is that this measure can be obtained by starting from any arbitrary centrality vector and getting new centrality measures by requiring that each node's centrality be equal to the sum of the centralities of its neighbours; the component along the eigenvector with the largest eigenvalue then grows exponentially relative to the others, and in the limit we get the centrality defined above (up to normalization). The centrality then has the property that it is proportional to the sum of the centralities of the neighbours of each node i: x_i = (1/λ_1) Σ_j A_ij x_j .....Eq. 1 so that a node can be important because it is connected to many nodes, or because it is connected to important nodes, or both. Eigenvector centrality has problems for directed networks because, defined in the natural way, only vertices in strongly connected components (or their out-components) will have non-zero eigenvector centrality. This is because the map described by Eq. 1 passes centrality along edges in the direction they point, so the in-component will "lose" all its centrality in the long-time limit. Katz centrality addresses these problems ~ Need strongly connected for a directed network. Perron-Frobenius theorem This theorem is related to ergodicity of the map defined by the recursive relation used to define eigenvector centrality [write it here]. [Look at theorem stuff in Newman books, specially relevant footnotes]. Ensures centralities are positive.
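The limiting procedure just described (repeatedly replacing each centrality by the sum over neighbours and renormalizing) is power iteration on the adjacency matrix; a minimal sketch on an arbitrary small undirected graph:

```python
def eigenvector_centrality(A, iters=200):
    """Power iteration: repeatedly replace each node's centrality by the sum
    of its neighbours' centralities, renormalizing; on a connected,
    non-bipartite graph this converges to the leading eigenvector of A."""
    n = len(A)
    x = [1.0] * n                                      # arbitrary positive start
    for _ in range(iters):
        x = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        m = max(x)
        x = [v / m for v in x]                         # normalize so max = 1
    return x

# Triangle 0-1-2 with a pendant node 3 attached to node 0.
A = [[0, 1, 1, 1],
     [1, 0, 1, 0],
     [1, 1, 0, 0],
     [1, 0, 0, 0]]
x = eigenvector_centrality(A)
print(x)  # node 0 highest; nodes 1 and 2 equal; pendant node 3 lowest
```

Node 0 is both well connected and connected to well-connected nodes, so it ends up with the largest centrality, while the pendant node inherits some centrality purely from its important neighbour.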
Self-propelled particle, Self-electrophoresis, Catalytic conductor-insulator Janus swimmer Electrokinetic effects in catalytic platinum-insulator Janus swimmers "Pt-insulator Janus particles, the absence of conduction between the two hemispheres suggests a mechanism independent of electrokinetics." (referring to the mechanisms that involve movement of electrons in bimetallic swimmers, see Self-electrophoresis). Thus Self-diffusiophoresis was suggested. However, as they show in that paper, some electrokinetic effects can still play a role in the Pt-insulator Janus particles. "We find that their motion is due to a combination of neutral and ionic diffusiophoretic as well as electrophoretic effects whose interplay can be changed by varying the ionic properties of the fluid." One of their main findings is that a gradient of catalyst is required to produce appreciable propulsion velocity for single-metal catalytic swimmers. Main mechanisms of the electrokinetic effect To see the main mechanism of the effect they discover (the mathematical derivation is outlined in the paper), notice that at the pole the catalytic reaction happens faster, and so there is a higher or lower concentration of H⁺/e⁻ pairs depending on whether the reaction is mostly consuming or producing them (see reaction diagram). Notice that the electrons (e⁻) diffuse much faster inside the Pt metal, so that they spread through the Pt hemisphere, while the protons (H⁺) diffuse much more slowly. Note that the electrons will redistribute in such a way that the tangential component of the electric field in the metal is zero. This distribution of charges creates an electric field that drives the ions in the fluid, propelling the Janus sphere. In the case in the paper, I think the place where the reaction happens faster (near the pole) also consumes H⁺ faster, so there is a depletion of H⁺ there, and a relatively higher concentration near the equator. There is thus a net electric field that pushes the protons from the equator to the pole (i.e. they push each other). They drag the fluid with them too, so that the particle propels itself by this self-electrophoretic mechanism. See also Ion Drive for Vesicles and Cells See Colloid Transport by Interfacial Forces for matched asymptotic analysis of fluid flow. 
And see paper for chemical reaction kinetics and diffusion equations. Why does the double loop topology mean we can reduce overall catalytic reaction rate without significant reduction of colloid velocity? https://www.youtube.com/channel/UCPC6uCfBVSK71MnPPcp8AGA/playlists https://en.wikipedia.org/wiki/Electrophoresis https://en.wikipedia.org/wiki/Zeta_potential https://en.wikipedia.org/wiki/Double_layer_(surface_science) Induced-charge electro-osmosis Induced-charge electrokinetics See Electrostatics See book by Hunter - Foundations of colloid science. Individual and collective behavior of artificial swimmers: "Janus particles" See this post: https://www.facebook.com/groups/hedonistic.imperative/permalink/10152547241106965/ and movie Phenomenon (1996) Dave says: "it is not emotions we need to control but behaviour. We do not learn from emotions by curbing and suppressing them but by fully experiencing them." When Emotions Make Better Decisions - Antonio Damasio Hm, it seems like emotion is our Q function in Reinforcement learning. It is kind of a summary of wisdom from past experiences. Hm this is interesting.. If we are guided by emotions too much, then our Q function will learn by trying to amplify the positive emotions it encodes; this may produce a positive feedback loop, which sounds like addiction to me. If however, we ignore emotions too much, we are not making use of this awesome machine learning algorithm we have built into our brain, and may get stalled in philosophical analysis too often in life, by trying to logically deduce everything. In fact, modern Artificial intelligence trends seem to show that deep learning and heuristics-based learning are more powerful than the older symbolic/logic approach to AI. However, judging from how our brain works, it appears that the optimum may be a combination of the two, using one or the other as appropriate! 
Antonio Damasio's research in neuroscience has shown that emotions play a central role in social cognition and decision-making" This seems to be related to thinking fast & slow (Read that book!), and also to how AIs now seem to think more intuitively (so maybe in a sense they have some level of emotion now!). See this to see how these considerations of thinking fast & slow, heuristically vs deductively, relate to utilitarian ethics issues: Facing the unknown: the future of humanity - Nick Bostrom Not sure. Hm, of course, this is just a fuzzy representation, but I think I would swap the terror and amazement branches. It'd be interesting to see the logic behind this better though. Producing energy doesn't mean creating it from nothing, as that would violate the principle of conservation of energy in Physics. Energy production thus refers to converting energy from one form (often a storage form) to another form which is useful (often to do mechanical work). Technically using the Sun as energy source/fuel. Here we include applied sciences as part of engineering Problem-solving strategies TRIZ - Theory of inventive problem solving. Apparently used by Samsung Free MIT books: https://archive.org/details/mitlibraries The entropy rate of an information source (see Data transmission) is the average entropy of a letter of the source. An information source is often modelled as a discrete-time stochastic process {X_i}, where each X_i is called a "letter". The entropy rate is then defined as H(X) = lim_{n→∞} (1/n) H(X_1, …, X_n), when the limit exists (see also Shannon-McMillan-Breiman theorem). Chapter 2 Information Measures - Section 2.10 Entropy Rate of a Stationary Source One can define a related measure, H'(X) = lim_{n→∞} H(X_n | X_{n-1}, …, X_1), by using conditional entropies. It can be shown that, for a stationary Information source, the entropy rate exists and is equal to H'(X). 
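For a stationary Markov source this reduces to the conditional entropy of the next letter given the current one: H = -Σ_i π_i Σ_j P_ij log2 P_ij, with π the stationary distribution. A minimal numerical sketch (the example transition matrix is made up):

```python
import numpy as np

def markov_entropy_rate(P):
    """Entropy rate (bits per letter) of a stationary Markov chain:
    H = -sum_i pi_i sum_j P_ij log2 P_ij, where pi is the stationary
    distribution, i.e. the left eigenvector of P with eigenvalue 1."""
    P = np.asarray(P, dtype=float)
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])  # eigenvector for eigenvalue 1
    pi = pi / pi.sum()                              # normalize to a distribution
    # mask zero entries so 0 * log 0 contributes 0
    terms = np.where(P > 0, P * np.log2(np.where(P > 0, P, 1.0)), 0.0)
    return -float(pi @ terms.sum(axis=1))

# an i.i.d. fair-coin source is the special case of identical uniform rows
print(markov_entropy_rate([[0.5, 0.5], [0.5, 0.5]]))  # → 1.0
```

A deterministic chain (a permutation matrix) gives entropy rate 0, as expected: each letter is fully predictable from the previous one.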
The Entropy rate of an FSP, as defined in Finite state channel See here: http://pfister.ee.duke.edu/thesis/chap4.pdf Blackwell (1957): The entropy of functions of finite state Markov chains Birch (1962): Approximations for the Entropy for Functions of Markov Chains Ordering in sequence spaces A mathematical theory of ordering (with constraints) in sequence spaces was first presented in [7] and [1]. In their setup, an algorithm is sought which “orders” any sequence of length n, i.e., which transforms the sequence x⃗ into the sequence y⃗ (of the same length and with the same symbols in it), such that the number of possible resulting sequences y⃗ is as small as possible. In this sense ordering is a generalization of sorting x⃗, as sorting would yield the absolute minimal number of sequences y⃗. Ordering in Sequence Spaces: An Overview Creating order in sequence spaces with simple machines Entropy reduction, ordering in sequence spaces, and semigroups of non-negative matrices see here Often defined for a (probabilistic) Information source. Here they define a (non-standard) notion of entropy for a specific sequence. Topological entropy of a string (symbol sequence) measure-theoretical or
Kolmogorov-Sinai entropy See Entropy and complexity of finite sequences as fluctuating quantities https://en.wikipedia.org/wiki/Enzyme Enzymes are macromolecular biological catalysts Michaelis-Menten rule Derived from kinetic rate equations for a simple catalytic reaction. The rate (per unit volume) of catalysis at steady state is v = v_max [S] / (K_M + [S]). Derivation Keywords: Network science, Epidemiology Cascades on Networks. There Watts' cascade model is described, among other things. See Mason and Gleeson "Dynamical systems on networks" For Simple contagions, a node can get infected by simple exposure to another infected node (possibly with a certain probability or rate). These are mostly compartmental models, and their extensions are used to model mostly biological contagions (like infectious diseases), as well as some IT contagions (like computer viruses) For Complex contagions, nodes get infected by more complex processes, often involving several other nodes. These are often used to model more complicated social contagions and phenomena. See Social dynamics See also wiki page: Complex contagion There are many epidemic models. Some use simple stochastic compartmental models based on a Master equation (see Simple contagion). See Epidemics on networks, for models that include the underlying network structure. The theory of Knowledge. What is the nature of knowledge? What are the obstacles to the attainment of knowledge? 
What can be known? How does knowledge differ from opinion or belief? Statistical Mechanics Lecture notes (Oxford Maths) Statistical Mechanics Lecture notes (Oxford Physics) Can formulate as: Ergodic theory (Ancient Greek: ergon work, hodos way) is a branch of mathematics that studies dynamical systems with an Invariant measure and related problems https://en.wikipedia.org/wiki/Ergodic_theory Recent Trends in Ergodic Theory and Dynamical Systems Karma Dajani - An introduction to Ergodic Theory of Numbers (Part 1) See Coding theory Forward error correction: forward error correction (FEC) or channel coding[1] is a technique used for controlling errors in data transmission over unreliable or noisy communication channels, where the information flows only one way (see here). The central idea is that the sender encodes the message in a redundant way by using an error-correcting code (ECC). The American mathematician Richard Hamming pioneered this field in the 1940s and invented the first error-correcting code in 1950: the Hamming (7,4) code.[2] Two-way error correction: things like ARQ (automatic repeat request). Main types of FEC codes: See http://pfister.ee.duke.edu/thesis/chap1.pdf, and other chapters. (IC 1.3) Applications of Error-correcting codes Solidity -
Solidity docs -
Solidity Browser - Cosmo (Doesn't really work) Local node - Setting up private network - How To Create A Private Ethereum Chain Tutorial for a contract example: https://ethereum.org/token#the-code Meteor template: https://github.com/SilentCicero/meteor-dapp-boilerplate http://www.ethereumoxford.org/tutorials/Tutorial3.html The Wallet: https://github.com/ethereum/mist/releases Evolution (wiki) is a positive feedback loop: it's all about changes that perpetuate those changes. Whenever you change a gene in such a way that the change makes that gene more likely to stick around, the change tends to persist. But it doesn't need to be a gene. You can make self-sustaining cultural changes, like memes, or self-fulfilling prophecies. Of course, positive feedback loops are found in maaany places, and they are indeed one of the main causes of self-organization in complex systems, so it is nice to see that evolution is just an example of one. Dawkins' idea of replicators (see his article) of course fits well, because replicators are just self-sustaining structures. See also Units of evolution: A metaphysical essay and this, and The Elementary Units of Heredity (cited in his article) See MMathPhys oral presentation, Evolutionary computing, Genetics Read book: Dawkins - The extended phenotype Evolutionary Dynamics- Exploring the Equations of Life - by Martin A. Nowak
slides
website
Evolutionary dynamics on graphs
djvu History of evolutionary thought Theoretical evolutionary genetics - Felsenstein (book), pdf ON THE FORMALIZATION OF THE EVOLVING TRANSFORMATION SYSTEM MODEL Evolutionary Theory and Mathematics Mathematical Modeling of Evolution “The arrival of the fittest”: Toward a theory of biological organization Kimura's neutral theory of evolution. He proposed that (at least for molecular evolution) most mutations are neutral, meaning that they don't lead to a change in fitness. Bias in GP maps, Arrival of the frequent See Genetics Genotype-phenotype map, Bias in GP maps Discovery of a fundamental limit to the evolution of the genetic code Scientists discover the evolutionary link between protein structure and function Some older disorganized thoughts: Replicators at different levels. Multilevel selection may not be necessary. However, it may be useful, it is just different ways of looking at evolution at different levels, depending on which processes are most important: mostly which (approximate) replicators are being looked at Group, kin, individual, gene etc selections are just different proximate/ultimate levels of causation on the same evolutionary process People https://en.wikipedia.org/wiki/Ernst_Mayr See in wiki article of evolution https://en.wikipedia.org/wiki/Evolutionary_computation See Evolution Computational intelligence - Scholarpedia Evolution of evolvability Slides Complexity compression and evolution https://en.wikipedia.org/wiki/Genetic_programming https://en.wikipedia.org/wiki/Gene_expression_programming See Holland's work. For e.g. Holland, J. H. (1992). Adaptation in Natural and Artificial Systems, MIT Press, Cambridge MA. Three Elements of a Theory of Representations Redundant Representations in Evolutionary Computation
As a result, uniformly redundant representations do not change the behavior of GAs. Only by increasing r, which means overrepresenting the optimal solution, does GA performance increase. Therefore, non-uniformly redundant representations can only be used advantageously if a priori information exists regarding the optimal solution. Bias towards simplicity (see MMathPhys oral presentation) similar to regularization in Machine learning? https://en.wikipedia.org/wiki/Evolvable_hardware Whatever happened to evolvable hardware? https://en.wikipedia.org/wiki/Reconfigurable_computing Automated Antenna Design with Evolutionary Algorithms Logos software from MIT for agent-based simulation and others Conway's game of life http://www.scholarpedia.org/article/Game_of_Life Automata theory, cellular automata. Smooth cellular automata: https://www.youtube.com/watch?v=KJe9H6qS82I Life in life, meta Benefits of Sexual Reproduction in Evolutionary Computation See MMathPhys oral presentation. Automata theory http://link.springer.com/chapter/10.1007/978-3-642-23780-5_20#page-1 http://www.sciencedirect.com/science/article/pii/S0031320305000294 http://www.mitpressjournals.org/doi/abs/10.1162/neco.1992.4.3.393#.V5JBI-02fCI http://www.mitpressjournals.org/doi/abs/10.1162/neco.1989.1.3.372#.V5JBB-02fCI Simplicity bias in finite-state transducers Evolving Finite State Machines with Embedded Genetic Programming for Automatic Target Detection Learning Finite-State Transducers: Evolution Versus Heuristic State Merging Boolean networks and their evolution (What Darwin didn't know: natural variation is structured). 
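The random Boolean networks mentioned above are easy to simulate directly; since the dynamics are deterministic on a finite state space, every trajectory must end on an attractor cycle. A minimal Kauffman-style sketch (N = 8, K = 2 and the seed are arbitrary choices, not taken from any of the papers above):

```python
import random

def random_boolean_network(n=8, k=2, seed=0):
    """Build a random NK Boolean network: each node reads k randomly chosen
    inputs through a random Boolean function (a truth table of 2**k bits)."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]

    def step(state):
        # each node looks up its next value from its truth table
        return tuple(
            tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
            for i in range(n)
        )
    return step

def attractor_length(step, state):
    """Iterate the deterministic dynamics until a state repeats;
    return the length of the attractor cycle reached."""
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    return t - seen[state]

step = random_boolean_network()
cycle = attractor_length(step, (0,) * 8)
```

With n = 8 there are only 2^8 = 256 states, so the cycle length is between 1 and 256; sweeping K is one way to probe the ordered/critical/chaotic regimes discussed above.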
Introducing Domain and Typing Bias in Automata Inference An Automaton Approach for Waiting Times in DNA Evolution Also, genetic regulatory networks: Highly designable phenotypes and mutational buffers emerge from a systematic mapping between network topology and dynamic output, Evolvability and robustness in a complex signalling circuit Ergodicity of Random Walks on Random DFA On the Effect of Topology on Learning and Generalization in Random Automata Networks Quantifying the complexity of random Boolean networks The state complexity of random DFAs http://tuvalu.santafe.edu/~walter/AlChemy/alchemy.html Artificial chemistry Is a random transducer an appropriate random model for GP maps in Nature? For instance, in Gene regulatory networks, when modelled as random Boolean networks, the state transition network is probably not just a random transducer... Though maybe it depends on the regime. For instance, in the critical regime we apparently observe the largest GP map bias See Simplicity bias in finite-state transducers You need to be able to loop around the non-coding region, and around the coding region, to get non-trivial designability/complexity plots. This FST shows a good example of an approximately absorbing region with two non-coding states. The fact that the region is only approximately absorbing, and that there is a cycle outside that region, means we will get variety in the output. FST table: In this example there is clear bias towards a sequence, as there is an absorbing region made entirely of non-coding states. However, the rest of the FST does not have any loop, so there's barely any possibility for variety of outputs, and the designability/complexity plot is trivial. Here is an example of an FST with an approximately absorbing region with non-coding states that is the whole FST. suggesting that some of the results discussed in this paper for
RNA may hold more widely in biology See also Evolving automata Paper with several examples of GP maps, including a cellular automaton map: An investigation of redundant genotype-phenotype mappings and their role in evolutionary search Percolation processes that show a discontinuous, or at least very steep, phase transition. See this image for a nice summary of types of explosive percolation processes. The reviews below also summarize results, and below we discuss some of the main types. Explosive Percolation: Novel critical and supercritical phenomena Impact of single links in competitive percolation Achlioptas processes follow m-edge rules, which involve choosing m candidate edges uniformly at random between any pairs of nodes (compare with other Spanning cluster-avoiding processes) and applying a rule to select which one is actually added. These have been proven to be continuous in the thermodynamic limit, for a fixed m. Processes based on choosing vertices at random and adding edges among those vertices according to some rule. Vertex rules are actually a generalization of m-edge rules. Half-restricted process is a variant of the Erdős–Rényi process which exhibits a discontinuous phase transition. Explosive Percolation in Erdős-Rényi-Like Random Graph Processes In each step, two vertices are connected by an edge, but one of them is restricted to be within the smaller components (more specifically, within a set composed of a given fraction, f, of the total nodes, chosen in ascending order of the size of the component they belong to; this is also called the restricted vertex set). Note that the restricted vertex set is recalculated after every step, as the clusters have changed. 
This process exhibits a discontinuous percolation transition for any choice of the fraction f. A spanning cluster-avoiding process (SCA) is an Explosive percolation model based on classifying bonds into those that facilitate the creation of the spanning cluster and those that don't, and preferentially selecting those that don't. They are similar to Achlioptas processes (m-edge processes). However, they don't require the candidate edges to be chosen at random between any pairs of nodes; instead the candidate edges can belong to a predetermined underlying network, commonly a hypercubic lattice. They are capable of showing discontinuous transitions, for certain choices of the number of candidate edges chosen per step. I think there should be a term used for m-edge-like processes that have an underlying network. See Models of network formation Models with extra edge addition Model can consist of the BA model, but with an extra process carried out at each step: a given number of edges is added to the network between two nodes with a probability proportional to their degree. One can again construct a master equation, and get a power-law degree distribution. Similar models exist that generalize Price's model instead of BA. Edge removal Simple model: at each update step we remove edges chosen uniformly at random from the set of all edges. The probability that node i loses an edge connected to it, for each of these removals, is k_i/m. This is because randomly choosing an edge means randomly choosing a pair of stubs, and i will lose an edge when either of these randomly chosen stubs coincides with one of the stubs incident to i. The probability of this happening for each of the randomly chosen stubs is k_i/(2m), and the probability that either stub is from i is the sum of the two. However, the simple sum is exact only because the BA network formation model doesn't allow self-edges to form (the two stubs cannot both belong to i). Therefore we are left with k_i/m, as in Newman's book. Models with edge addition and removal One can also combine the two models above. 
The master equation in this case becomes more complicated, because the degree evolution now depends on both the addition and the removal processes. Generating-function methods then need to be used. See Newman section 14.4.2 or the paper Exact solutions for models of evolving networks with addition and deletion of nodes for the detailed calculation; a power-law degree distribution is still obtained (though with a different exponent, of course), as long as the edge removal rate is not too high. One can also do the analogous calculation for removal and addition of nodes. Non-linear preferential attachment Attachment probability may depend nonlinearly on degree, i.e. we have a nonlinear attachment kernel, proportional to k^α. One can still derive an asymptotic form of the degree distribution for the sublinear case α < 1, of interest because empirical networks have shown this form of preferential attachment. For α < 1, the degree distribution is no longer a power law, but a "stretched exponential" involving a factor exp(-c k^(1-α)). This function decays slower than an exponential because 1 - α < 1. There are also similar but more complicated expressions for other α in the sublinear range. One can also calculate the case of superlinear preferential attachment, α > 1. In this case it turns out that a "leader" emerges in the network, gaining a non-zero fraction of all edges asymptotically, with the rest of the nodes having degree less than some fixed constant. See here for more. Nodes with inherent fitness Inherent fitness, aka attractiveness, may vary across nodes in the network. See Bose-Einstein condensation in complex networks and Competition and multiscaling in evolving networks for a model. In it a fitness value, η_i, is assigned to each node (sampled from a given distribution ρ(η)), and is unchanged thereafter. Now, the attachment kernel depends on the fitness as well: it is proportional to η_i k_i. 
The same method used as for the section Degree distribution as a function of time of creation above can be used (with the fitness-weighted kernel instead of plain preferential attachment), and a solution can be obtained analytically in some cases; a power-law distribution is obtained for each value of η, but not overall, as the overall distribution depends on what ρ(η) is. In Bose-Einstein condensation in complex networks, they show an interesting effect that happens for some choices of ρ(η), where a few nodes (a constant number of them, so as a fraction they go to 0 in the large-n limit and so don't appear in the degree distribution) have a degree proportional to the network size, and so do contribute to quantities like the mean degree. This is analogous to Bose condensation. However, it is not known which ρ(η) will produce condensation, and computer simulations suggest that whether condensation occurs or not may depend on the fluctuations, and thus not be deterministic (see Pólya urn; is this at all related to the Ross–Littlewood paradox?) There is also interesting work on the statistics of the node with maximum fitness (which changes more and more rarely, as ever higher values of η must be sampled). These follow so-called record dynamics Slow dynamics from noise adaptation. More relevant review articles: Statistical mechanics of complex networks Some features that are important in the behaviour of an evolving system. Important features: A fibrous material is any material system formed by fiber-like constituents such as felt, cloth, paper, muscle and wood. A filter on a set X is a family F of subsets of X such that: (a) F does not contain the empty set; (b) F is algebraically closed under finite intersections; (c) F is an upper family. An upper family refers to a family of subsets which is an Upper set w.r.t. the Lattice of subsets of X; that is, if a set is in the family, then any superset of that set is also in the family. See also Filter base A filter base is a family of non-empty subsets of a Set such that if B1 and B2 are in the family, then there exists B3 in the family such that B3 ⊆ B1 ∩ B2. 
This can be used to construct a Filter (Topology): This notion can also be extended so that a family of filter bases (which we call a base, or a basis) generates the filters forming the Neighbourhood structure of a Neighbourhood space, or of a Topological space. For a topological space, the arbitrary unions of sets in the filter base can be considered to generate the open sets A filter base can in turn be generated by a Filter subbase A filter subbase can generate a Filter base, and, like it, the notion can be extended so that a family of filter subbases (which we call a subbase) generates a whole Topological space. Note that the sets forming the subbase are part of the base they generate, because finite intersections include the intersection of a set with itself. In Information theory, and in particular Data transmission, a finite state channel (FSC) is a discrete-time channel where the distribution of the channel output depends on both the channel input and the underlying channel state. This allows the channel output to depend implicitly on previous inputs and outputs via the channel state. The channel can be modelled as a stochastic Finite-state transducer. See here for more: http://pfister.ee.duke.edu/thesis/chap4.pdf Entropy and Mutual Information for Markov Channels with General Inputs Blackwell, Breiman, and Thomasian introduced indecomposable FSCs (IFSCs) in [7] and proved the natural analogue of the channel coding theorem for them. Birch discusses the achievable information rates of IFSCs in [5], and computes bounds for a few simple examples. Similar to the mapping between Boolean lattices and directed percolation. See Relations between the stability of Boolean networks and percolation. See Markov chain In the thesis he considers a Markov input process as the Information source Combining the Markov input process and the Finite State Channel gives a new Markov process over the states given by the cross product of the states of the channel and of the input. 
They label this new set of states by integers too. This combined process is what they call a Finite state process (FSP). Capacity of finite state Markov channels with general inputs A Randomized Approach to the Capacity of Finite-State Channels Capacity, mutual information, and coding for finite-state Markov channels See Automata theory Finite number of states; transitions between them are followed according to a sequentially read (a.k.a. on-line) input string. Formally, a finite automaton on an alphabet A is a tuple (Q, E, I, F), where Q is the set of states, and I and F are subsets of Q, namely the sets of initial and final states, respectively. E is the set of edges between states, each labelled by a letter of the alphabet. The transition encoded by an edge is performed when the automaton reads that letter while being at the first state of the transition. Deterministic machine
Reversing deterministic machines Non-deterministic finite state machines can have more than one transition that may be taken when reading a certain input symbol in a state. They may also have epsilon transitions, which can be taken without reading a symbol. A string is accepted if there is at least one path through the machine that ends in an accepting state Deterministic machines are equivalent in power to non-deterministic ones. But non-deterministic machines are sometimes much easier to think with. Convert non-deterministic machine to deterministic machine Equivalence between non-deterministic and deterministic machines is the key in proving that regular sets are closed under reversal. To construct an FSM that accepts the complement of a regular set, just swap accepting and non-accepting states. Why are regular sets called regular? he uses a nice heuristic explanation of the pumping lemma Build an FSM on the web: http://madebyevan.com/fsm/ The Fisher information matrix (FIM) is the (negative expected) Hessian of the log-likelihood function. If one Taylor expands the log-likelihood around a maximum, and keeps only terms up to second order, we are approximating the peak by a Gaussian peak, and this is what is done to find the FIM The Covariance matrix is the inverse of the Fisher matrix. The drop in log-likelihood can be calculated as Δ(log L) ≈ -(1/2) δθᵀ F δθ, where F is the FIM, and δθ is a small step in parameter space from the maximum of the likelihood. Iteration is faster if the expansion sequence is unknown (i.e. we don't know if it's a power series or a log series, for instance); slower, if the expansion sequence is known. For example, to find roots of an equation we need to express it as x = g(x), where x is the solution we're looking for. Then, starting from a guess (which, if possible, should be chosen to be the solution of the unperturbed problem, so that the solution is right to order 1 at least), we iterate x_{n+1} = g(x_n), and the iterations should get better if |g'| < 1 (prime = derivative) and g is suitably chosen. However, to get an asymptotic expansion we actually require g' → 0 as the perturbation parameter ε → 0. 
In particular, if g' = O(ε), one gets one term in a power-series expansion per iteration, as can be seen from the argument in the notes, where we see that the difference between the true answer and the current answer gets multiplied by g' at every iteration. If we don't know the order of the error, the way to check if the iteration is right up to some order is to try one more iteration and see if the term changes (though I don't think that's definite proof). The usual procedure is to place the dominant term of the equation on the LHS (i.e., the side that will give the new value), so that it can be calculated as a function of the terms on the RHS (i.e., the previously-obtained value). As we will see later, the identity of the dominant term can be adjusted by scaling. I think we place the dominant term of the equation on the LHS because that ensures we choose that term to be right to first order in the 0th iteration, and so the equation is right to first order. In the simple example, we selected the dominant term; had we selected the other term, we would have had to divide by a small quantity and the unperturbed case would not be well defined, indicating that we want to get the dominant term right in the equation. Another way to look at it is dominant balance: by putting the dominant term on the LHS, the iteration approximately expresses dominant balance! For the iterative method, different functions g may be needed to find different perturbed roots of an algebraic equation, so that the condition g' → 0 as ε → 0 is satisfied. The proof that this method works is based on a Fixed-point theorem, in particular on the contraction mapping theorem, also used to prove that Fractals are well defined. See more at Fixed-point iteration If |gradient| > 1, the iteration doesn't converge: A piece of equipment or furniture that is fixed in position in a building or vehicle. Fluid dynamics is the branch of Fluid mechanics that describes the causes of motion, i.e. the forces and torques that can affect fluids, and how these affect their motion. 
The equations of fluid dynamics can be derived from the principles of Mechanics (in particular continuum mechanics). More recently they have also been derived from the microscopic statistical picture of moving and interacting particles, thanks to the development of Kinetic theory. Navier-Stokes equation https://en.wikipedia.org/wiki/Navier%E2%80%93Stokes_equations Oxford course. Batchelor's book, etc. https://www.youtube.com/watch?v=pqWwHxn6LNo&list=PL0EC6527BE871ABA3&index=2 https://en.wikipedia.org/wiki/Strain_rate_tensor See table I made for 3rd year revision of heat, particles, etc. https://en.wikipedia.org/wiki/Convection%E2%80%93diffusion_equation Fluid kinematics is the branch of Fluid mechanics that (just like kinematics, in Mechanics) describes the possible motion of fluids. Flow can be decomposed into: Simple shear is a combination of rotation and pure strain. The branch of Mechanics that deals with the motion and the forces that affect fluids A fluid is a piece of matter that has no, or negligible, elasticity. This means it flows under virtually any applied force. There are three main phases of matter that are fluid: liquids, gases and plasmas. More complex fluid phases, often composed of mixtures, are called complex fluids. Fluid dynamics describes the causes of motion, i.e. the forces and torques that can affect fluids, and how these affect their motion.
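For reference, the incompressible Navier–Stokes equations in their standard form (u velocity, p pressure, ρ density, ν kinematic viscosity, f body force per unit mass):

```latex
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f},
\qquad
\nabla\cdot\mathbf{u} = 0 .
```

The left-hand side is the material acceleration of a fluid element; the incompressibility constraint ∇·u = 0 closes the system.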
Magnetohydrodynamics and Electrohydrodynamics describe the dynamics of electrically conductive fluids. Fluid kinematics (just like kinematics, in Mechanics) describes the possible motion of fluids. Deriving the FP equation from the Langevin equation. The Fokker-Planck equation works for Markov processes in position space, so it is derived from the overdamped Langevin equation that ignores inertia. Detailed balance and equilibrium Setting the time derivative and the probability current to zero, and using Einstein's relation, we get the Boltzmann Distribution. N non-interacting particles We get the Smoluchowski equation. N interacting particles We get the BBGKY hierarchy, as in Kinetic theory Backwards Fokker-Planck equation Tells you how likely different initial conditions are to arrive at a certain fixed point in the future. First-passage time Calculation of the mean time required to leave a region. Kramers rate theory The rate at which fluctuations push particles over a barrier. Survival probability
Crucial argument: reflecting parts of the trajectory leaves the same probability See also here for a nice derivation from the boundary conditions Stationary solution of the 1D FP equation Assume a periodic potential with a bias, and assume the solution is periodic. This is not the equilibrium solution (which would be an exponentially growing P to compensate the bias, just like the exponential growth of density in gravity or in a constant electric field). Therefore, even though it is stationary, it carries a non-zero probability current. If we integrate the stationary equation over one period, taking this periodicity into account: The easiest way to calculate the escape time from one well to the next is to assume there is one particle per well: The average drift velocity is then the well spacing divided by the mean escape time. Fluctuation-driven transport Analogous to AC rectification in diodes! Quantum mechanical analogy See the video, and the lecture notes! Also applicable in Path integrals for stochastic processes Stochastic quantization and path integral formulation of the Fokker-Planck equation https://en.wikipedia.org/wiki/Molecular_gastronomy http://genomicgastronomy.com/about/ Soylent, Joylent, Huel, Nano
http://www.mealsquares.com/ (essentially solid soylent. See if there is an EU version. Otherwise, good idea for a startup lol!) https://www.ketosoy.com/blogs/news/results-of-the-2016-soylent-eaters-survey 3D printed food. Relations to compilers, parsers, etc. Grammars, etc. A nice new language for this: Ohm See Automata theory, GKeep notes. Chomsky hierarchy. (see also Theory of computation). Mathematics - Formal Languages and Automata Theory Languages, grammars, etc. (Abstract) Rewrite systems A set of objects, and a binary relation that tells us how we are allowed to transform expressions. If these rules act on terms out of which an expression can be built, then this is a term rewrite system. They are non-deterministic Markov algorithms, and they are Turing complete. They are related to normal forms, lambda calculus, and combinatory logic See Complex systems. Related to Discrete dynamical system, and Symbolic dynamics Synopsis: Fractals for Sharper Vision See Limits and infinity Lecture Notes on Fractals, Iterated Function Systems, and Related Topics (updated, 5/02/16) See notes Iterated function systems and the code space Codes as fractals and noncommutative spaces Coasts are fractals. 
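The iterated function systems mentioned above have a direct numerical realization, the "chaos game": repeatedly apply a randomly chosen contraction map and the iterates settle onto the attractor. A minimal sketch for the Sierpinski triangle (the vertex coordinates, starting point and iteration count are arbitrary choices):

```python
import random

def chaos_game(n=10_000, seed=1):
    """Approximate the Sierpinski triangle: the attractor of the IFS of
    three maps, each contracting the plane halfway toward one vertex."""
    rng = random.Random(seed)
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
    x, y = 0.3, 0.3
    points = []
    for _ in range(n):
        vx, vy = rng.choice(vertices)        # pick one of the three maps
        x, y = (x + vx) / 2, (y + vy) / 2    # contraction with ratio 1/2
        points.append((x, y))
    return points

pts = chaos_game()
```

Since each map is a contraction with ratio 1/2 toward a point of the unit box, all iterates stay inside [0, 1]²; plotting the points reveals the fractal, which is exactly the contraction-mapping picture behind "Fractals are well defined" above.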
Here's a perfect fractal coast Frontend web development https://medium.freecodecamp.com/angular-2-versus-react-there-will-be-blood-66595faafd51#.4bc9n0ott ReactJS seems better See Voxel.css for Minecraft-like stuff in browser Graphics and visualization ~ ~ ~ http://fortawesome.github.io/Font-Awesome/ Webgl See chromeexperiments website Nice 2D webgl lib: http://www.pixijs.com/ voxel.css http://codepen.io/sha99y8oy/pen/GZZXyL http://www.effectgames.com/demos/canvascycle/ HTML presentations: impress.js, reveal.js, deck.js See here: https://musiclab.chromeexperiments.com/Technology For microphone input: https://en.wikipedia.org/wiki/WebRTC For accelerometer, gyroscope input (from a phone for e.g.) see chromeexperiments A type of Relation between two Sets, such that for each element of the set called the domain, there is a unique associated element of the set called the co-domain. John Klauder - Lectures on Functional Integration Some Recommended Books G. Roepstorff, "Path Integral Approach to Quantum Physics", Springer-Verlag, Berlin, 1996 R. Feynman and A. Hibbs, "Quantum Mechanics and Path Integrals", McGraw-Hill, New York, 1965 A.V. Skorokhod, "Studies in the Theory of Random Processes", Addison-Wesley Publishing, Reading, Massachusetts, 1965 B. Simon, "Functional Integration and Quantum Physics", Academic Press, New York, 1979 L. Schulman, "Techniques and Applications of Path Integration", John Wiley & Sons, New York, 1981 J. Klauder and B-S. Skagerstam, "Coherent States", World Scientific, Singapore, 1985 C. Grosche and F. Steiner, "Handbook of Feynman Path Integrals", Springer-Verlag, Berlin, 1998 J. Klauder, "Beyond Conventional Quantization", Cambridge University Press, Cambridge, 2000 H. 
Kleinert, "Path Integrals in Quantum Mechanics, Statistics, and Polymer Physics", 3rd Edition, World Scientific, Singapore, 2003 Introduction to Functional Programming youtube videos Functional programming languages The syntax is so nice. As he says in the vid, there is basically no syntax. It also reminds me of the data structures used for CASs Lisp Scala. yt vids Scheme Haskell. http://learnyouahaskell.com/ Functional programming in JavaScript Higher-order functions - Part 1 of Functional Programming in JavaScript http://elm-lang.org/
http://cycle.js.org/
https://baconjs.github.io/ (also reactive) Wikipedia:Portal/Directory/Sports and games https://en.wikipedia.org/wiki/Game#Definitions Computer game designer Chris Crawford, founder of The Journal of Computer Game Design, has attempted to define the term game[8] using a series of dichotomies: Creative expression is art if made for its own beauty, and entertainment if made for money.
A piece of entertainment is a plaything if it is interactive. Movies and books are cited as examples of non-interactive entertainment.
If no goals are associated with a plaything, it is a toy. (Crawford notes that by his definition, (a) a toy can become a game element if the player makes up rules, and (b) The Sims and SimCity are toys, not games.) If it has goals, a plaything is a challenge.
If a challenge has no "active agent against whom you compete," it is a puzzle; if there is one, it is a conflict. (Crawford admits that this is a subjective test. Video games with noticeably algorithmic artificial intelligence can be played as puzzles; these include the patterns used to evade ghosts in Pac-Man.)
Finally, if the player can only outperform the opponent, but not attack them to interfere with their performance, the conflict is a competition. (Competitions include racing and figure skating.) However, if attacks are allowed, then the conflict qualifies as a game. In particular, video games and computer games; but more generally, any Games https://www.unrealengine.com/what-is-unreal-engine-4 Unity 5 Minecraft Mods Quantum one made by MIT See Voxel.css for Minecraft-like stuff in browser. See Iconic maths ideas in Concrete mathematics, in particular ones using cubes. Gel: Nonfluid colloidal network or polymer network that is expanded throughout its whole volume by a fluid. A gel is thus a Porous solid with colloidal-size pores, filled with liquid. See also http://www.madsci.org/posts/archives/2001-03/984500675.Ch.r.html It is a substantially dilute cross-linked system, which exhibits no flow when in the steady state. By weight, gels are mostly liquid, yet they behave like solids due to a three-dimensional cross-linked network within the liquid. Note 1: A gel has a finite, usually rather small, yield stress. Note 2: A gel can contain: (i) a covalent polymer network, e.g., a network formed by crosslinking polymer chains or by nonlinear polymerization; (ii) a polymer network formed through the physical aggregation of polymer chains, caused by hydrogen bonds, crystallization, helix formation, complexation, etc., that results in regions of local order acting as the network junction points. The resulting swollen network may be termed a “thermoreversible gel” if the regions of local order are thermally reversible; (iii) a polymer network formed through glassy junction points, e.g., one based on block copolymers. If the junction points are thermally reversible glassy domains, the resulting swollen network may also be termed a thermoreversible gel;
(iv) lamellar structures including mesophases {Ref.[4] defines lamellar crystal and mesophase}, e.g., soap gels, phospholipids, and clays; (v) particulate disordered structures, e.g., a flocculent precipitate usually consisting of particles with large geometrical anisotropy, such as in V2O5 gels and globular or fibrillar protein gels.
Note 3: Corrected from ref.,[5] where the definition is via the property identified in Note 1 (above) rather than of the structural characteristics that describe a gel.[6] Hydrogel: Gel in which the swelling agent is water. Note 1: The network component of a hydrogel is usually a polymer network. Note 2: A hydrogel in which the network component is a colloidal network may be referred to as an aquagel. Note 3: Definition quoted from refs.[6][7][8] Epigenetics.. I.e. designability They show GP map bias. Highly designable phenotypes and mutational buffers emerge from a systematic mapping between network topology and dynamic output certain dynamical phenotypes can be generated by an atypically broad spectrum of network topologies. Such dynamical outputs are highly designable, much like certain protein structures can be designed by an unusually broad spectrum of sequences. The network topologies that encode a highly designable dynamical phenotype possess two classes of connections: Evolvability and robustness in a complex signalling circuit The number of genotypes with a given phenotype varies very widely among these phenotypes. Some phenotypes have few associated genotypes. Others have many genotypes that form genotype networks extending far through genotype space. A minority of phenotypes accounts for the vast majority of genotypes. Importantly, we find that these phenotypes tend to have large genotype networks, greater robustness and a greater ability to produce novel phenotypes. Thus, over a broad range of phenotypic robustness, robustness facilitates phenotypic variability in our study system. The effect of scale-free topology on the robustness and evolvability of genetic regulatory networks We find that SF networks generate oscillations much more easily than ER networks do, and this may explain why SF networks are more evolvable than ER networks are for oscillatory phenotypes. http://blog.stephenwolfram.com/2016/02/black-hole-tech/ Gravity waves observed! 
: Observation of Gravitational Waves from a Binary Black Hole Merger Notes from David Wallace's talk Standard approach to theories: Start with a manifold and geometric objects. There are some absolute objects, and dynamical objects. The spacetime symmetry group leaves absolute objects invariant. In GR, there are no absolute objects, so the full diffeomorphism group. Alternative: G-structured space Kleinian geometry: subtractive construction. vs Riemann geometry: additive construction Check video of David Wallace seminar 11/feb/2016 Generalized function, also called distribution. http://www.damtp.cam.ac.uk/user/dbs26/1BMethods/Distributions.pdf They are found as limiting cases of functions, where the limit itself is not a function, in the mathematical sense. However, they can be useful: "They’re designed to fulfill an apparently mutually contradictory pair of requirements: they are sufficiently well-behaved that they are infinitely differentiable and thus have a chance to satisfy partial differential equations, yet at the same time they can be arbitrarily singular – neither smooth, nor differentiable, nor continuous, nor even finite – if interpreted naively as 'ordinary functions'." One defines distributions as linear maps from the space of test functions (smooth functions with compact support) to the Real numbers. One can add distributions, and multiply distributions by smooth functions, but in general there is no way to multiply two distributions together. The most important example of a distribution that isn't just a function is the Dirac delta Gene editing with CRISPR/Cas9 Cas9 refers to a protein, found in bacterial immune systems, that is able to cut a DNA double strand at a point which matches the sequence of an RNA chimera (i.e. a molecule made of several RNA parts). This allows the programmable cutting of DNA. It is a particular type of restriction enzyme, which are enzymes that cut DNA at certain sites.
This is important for genetic engineering because it is known that when you cut DNA, one way DNA repairs itself is by rejoining the two ends of the cut, introducing a new piece of DNA. Paper that announced discovery Personal genome project, for "donating" your genome for research. Cambrian Genomics DNA laser printing! Gene therapy to save the world by Liz Parrish, CEO of BioViva. See Anti-ageing innovation. A gene is a particular portion of DNA in a chromosome that codes for a protein belonging to a certain family (that may then have some function in an organism or a cell). Every gene is identified with a particular protein, and vice versa (in standard biology). A chromosome is a single molecule of DNA, containing many genes; an organism often has several chromosomes. In a chromosome, the DNA is wound on histone proteins, and very densely packed, so that it can fit inside the nucleus. The packing structure is illustrated here. A locus (see wiki) is the physical part (the location) along the DNA sequence of a chromosome that a particular gene is found in. An allele is a version of a gene coding for a specific protein. The genotype is the sequence of all alleles of an individual. Genes, Alleles and Loci on Chromosomes https://en.wikipedia.org/wiki/Zygosity Mendel's laws 1. Law of segregation 2. Independent assortment Punnett square https://en.wikipedia.org/wiki/Chromosomal_crossover Map between a coding space (genotype) and another space, called the phenotype. These appear, for instance, in Evolution. See MMathPhys oral presentation Genotype–phenotype mapping and the end of the ‘genes as blueprint’ metaphor Developmental encoding or indirect encoding: you encode the instructions to build the system (by Morphogenesis), instead of the system itself (direct encoding). See Neuroevolution: Direct and Indirect Encoding of Networks.
Comparing direct and developmental encoding schemes in artificial evolution Genotype-phenotype maps - Stadler Ideas extending standard topology to explore the spaces defined by GPMs Evolving scalable and modular adaptive networks with Developmental Symbolic Encoding Ideas of evolvable GPMs, evolving evolvability, etc. Effects Related concepts A geological time span corresponding to tens to ~one hundred million years. See the timeline of the History of Earth Things often have a shape What is space? Well, it can be Euclidean, but it may also be non-Euclidean, and have curvature! https://en.wikipedia.org/wiki/Geometry New Horizons in Geometry (Dolciani Mathematical Expositions) 1st Edition See part of the book here: http://www.mamikon.com/VisualCalc.pdf Good discussion on Reddit: https://www.reddit.com/r/Ghost_in_the_Shell/comments/2dsuzs/opening_screen_for_sac/ Approximation algorithms for grammar-based data compression smallest grammar problem: find the smallest context-free grammar that generates exactly one given string. A granular material (also known as granular media or granular matter) is a dense packing of non-cohesive solid particles. See Soft matter physics and Complex systems Force Distributions in Dense Two-Dimensional Granular Systems
Despite the highly uniform density of a random packing of non-cohesive particles, photoelastic visualizations provide striking evidence of the heterogeneous distribution of contact forces, on a scale definitely larger than the typical particle size Bimodal Character of Stress Transmission in Granular Packings Shear-Jamming in Two-Dimensional Granular Materials with Power-Law Grain-Size Distribution The role of particle shape on the stress distribution in a sandpile Scattering of waves by impurities in precompressed granular chains Fragmentation process See Graph theory A graph consists of a set of vertices and a set of edges. See Graph theory A (graph) automorphism is an isomorphism from a graph to itself. Automorphisms capture the notion of symmetry for a graph, because imposing the above edge-preserving condition is the same as imposing that if we move vertices in a geometrical representation of a graph from their positions to the positions previously occupied by other nodes, while carrying their connections with them (if a connection exists between two vertices it must exist between their images, so that the map is a homomorphism, i.e. a structure-preserving map), then the new connections will be the same as those of the original graph (the homomorphism property implies the new connections are a subset of the original ones; for an isomorphism the inverse map must also be a homomorphism, so they are also a superset, and hence the edge sets are equal). ->Another way of looking at a graph automorphism is as a permutation of the node labels, such that a pair of vertices are connected if and only if their images are connected. ->Yet another way of looking at graph automorphisms is, I think, as symmetries of the Adjacency matrix. Any permutation of the node labels that leaves the adjacency matrix unchanged is a graph automorphism.
The set of all automorphisms of an object forms a group, called the automorphism group. Intuitively, the size of the automorphism group A(g) provides a direct measure of the abundance of symmetries in a graph or network. Every graph has a trivial symmetry (the identity) that maps each vertex to itself.
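The permutation view of automorphisms above can be checked directly by brute force. A toy sketch (illustrative code, not from any referenced paper): a relabelling is an automorphism iff it leaves the edge set, and hence the adjacency matrix, unchanged.

```python
# Brute-force enumeration of graph automorphisms: a permutation sigma of the
# node labels is an automorphism iff it maps the edge set onto itself.
from itertools import permutations

def automorphisms(n, edges):
    """All permutations of {0..n-1} that preserve the (undirected) edge set."""
    eset = {frozenset(e) for e in edges}
    autos = []
    for sigma in permutations(range(n)):
        if {frozenset((sigma[u], sigma[v])) for u, v in edges} == eset:
            autos.append(sigma)
    return autos

# The 4-cycle: its automorphism group is the dihedral group D4, of order 8
# (4 rotations + 4 reflections), so the "abundance of symmetries" is 8.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(len(automorphisms(4, c4)))  # 8
```

The identity permutation always appears in the result, matching the "trivial symmetry" remark above.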
https://www.wikiwand.com/en/Graph_dynamical_system A particular kind is a Boolean network, if the state of each node is binary. See also Sequential dynamical system See Graph theory A (graph) isomorphism is a mapping between the vertices of two graphs such that an edge is contained in the edge set of the first graph if and only if the corresponding edge (under the mapping) is contained in the edge set of the second. Two graphs are isomorphic if there exists an isomorphism between them. They are then also called "topologically equivalent". We can describe diffusion of a quantity associated with each node in a network with adjacency matrix , with the equation: where is the diffusion constant. In vector form: where is the diagonal matrix of degrees, and is the (combinatorial) graph Laplacian, which is then: We can solve this diffusion equation by writing any initial condition as a linear combination of eigenvectors of , and the coefficients will then evolve exponentially, with exponents given by the eigenvalues of the matrix. The graph Laplacian can be related to the edge incidence matrix, . This is defined by first labelling the ends of each edge as and . Then: Then, , from which one can show that the eigenvalues of are not only real (as it is symmetric), but also non-negative. This is an important physical property of the Laplacian, because it means the solutions of the diffusion equation include only non-diverging solutions, which makes sense since diffusion is constructed to conserve the quantity . In particular, the all-ones vector always has eigenvalue zero (this implies the Laplacian is singular). It can be shown that, more generally, the number of eigenvectors with eigenvalue zero is always equal to the number of components in the network. Thus the second eigenvalue of the Laplacian (when arranged in ascending order) is non-zero if and only if the network is connected. This eigenvalue is called the algebraic connectivity or spectral gap, and is useful in a technique known as spectral partitioning.
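The Laplacian properties above are easy to check numerically. A minimal sketch (assuming NumPy is available; the example graph is illustrative): L = D − A has non-negative eigenvalues, and the multiplicity of the zero eigenvalue equals the number of connected components, so the spectral gap is positive iff the graph is connected.

```python
# Numerical check of graph Laplacian properties: L = D - A is positive
# semi-definite, and the number of zero eigenvalues equals the number of
# connected components.
import numpy as np

def laplacian(n, edges):
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1
    return np.diag(A.sum(axis=1)) - A

# Two components: a triangle {0,1,2} and a single edge {3,4}.
L = laplacian(5, [(0, 1), (1, 2), (2, 0), (3, 4)])
eig = np.linalg.eigvalsh(L)        # ascending; all >= 0 up to rounding
print(np.sum(np.isclose(eig, 0)))  # 2 zero eigenvalues -> 2 components
```

Deleting the edge (3, 4) would add a third component, and correspondingly a third zero eigenvalue.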
Graphics and visualization libraries for Frontend web development ThreeNodes.js: vvvv "clone" in javascript/webgl http://threejs.org/
http://www.sitepoint.com/twelve-javascript-libraries-data-visualization/ Natural extension of the meet of two elements to an arbitrary Set of elements of a poset Interpreting the Partial ordering as "less than or equal", it can be understood as the greatest point that is less than or equal to all the points in the set. See Measures and metrics for networks Many networks naturally divide into groups. These are substructures that are prominent for some reason. Simple examples are: Many other definitions related to the idea of "groups" Generalization of components: a k-component is a maximal subset of nodes such that each is reachable from each of the others by at least k vertex-independent paths. Equivalently, no vertices in this set can be disconnected by removing fewer than k vertices; see cut sets. A variant can be defined using edge-independent paths. GPU computing CUDA optimization: https://github.com/akrizhevsky/cuda-convnet2 Nice Nvidia hardware for deep learning: https://developer.nvidia.com/devbox NVIDIA GPUs - The Engine of Deep Learning https://www.reddit.com/r/buildapcforme/comments/3vrokm/high_end_pc_for_deep_learning_up_to_3500/ A Topological space is Hausdorff if, for any pair of distinct points, there exist disjoint open sets each containing one of the points. "Cells that fire together, wire together." However, this summary should not be taken literally. Hebb emphasized that cell A needs to "take part in firing" cell B, and such causality can only occur if cell A fires just before, not at the same time as, cell B. https://en.wikipedia.org/wiki/Hebbian_theory in Neuroscience Hebb's rule, Hebb's postulate, and cell assembly theory.
Hebb states it as follows: Let us assume that the persistence or repetition of a reverberatory activity (or "trace") tends to induce lasting cellular changes that add to its stability.… When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased. Note that A fires just before, not at the same time as, cell B. This important aspect of causation in Hebb's work foreshadowed what is now known about spike-timing-dependent plasticity, which requires temporal precedence.[3] The theory attempts to explain associative or Hebbian learning, in which simultaneous activation of cells leads to pronounced increases in synaptic strength between those cells, and provides a biological basis for errorless learning methods for education and memory rehabilitation. In the study of neural networks in cognitive function, it is often regarded as the neuronal basis of unsupervised learning. Cell Assembly Signatures Defined by Short-Term Synaptic Plasticity in Cortical Networks The cell assembly (CA) hypothesis has been used as a conceptual framework to explain how groups of neurons form memories. CAs are defined as neuronal pools with synchronous, recurrent and sequential activity patterns
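A minimal sketch of the Hebbian rule in its simplified correlational form (Δw ∝ pre × post activity) — note this deliberately ignores the temporal precedence stressed above; the learning rate and patterns are illustrative assumptions, not from Hebb:

```python
# Simplified (correlational) Hebbian plasticity: weights between units that
# are repeatedly co-active grow faster than weights between uncorrelated ones.

def hebbian_weights(patterns, eta=0.1):
    """Accumulate dw_ij = eta * x_i * x_j over binary activity patterns."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for x in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += eta * x[i] * x[j]
    return w

# Neurons 0 and 1 always fire together; neuron 2 is mostly independent.
patterns = [(1, 1, 0), (1, 1, 0), (0, 0, 1), (1, 1, 1)]
w = hebbian_weights(patterns)
print(round(w[0][1], 2), round(w[0][2], 2))  # 0.3 0.1
```

Making this causal (strengthening only when A fires just before B) is the essential extra ingredient of spike-timing-dependent plasticity.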
A Markov process, often a Markov chain, that, through a mapping, produces an output that models some Stochastic process. A Hidden Markov Model (HMM) is a discrete-time finite-state homogeneous Markov chain observed through a discrete-time memoryless invariant channel. This is used, for instance, in Machine learning Le Prince, first ... Edison, Lumiere Real film continuity, involving action moving from one sequence into another, is attributed to British film pioneer Robert W. Paul's Come Along, Do!, made in 1898 and one of the first films to feature more than one shot. In 1900, continuity of action across successive shots was definitively established by George Albert Smith and James Williamson, who also worked in Brighton. In that year Smith made As Seen Through a Telescope, in which the main shot shows a street scene with a young man tying the shoelace and then caressing the foot of his girlfriend, while an old man observes this through a telescope. There is then a cut to a close shot of the hands on the girl's foot shown inside a black circular mask, and then a cut back to the continuation of the original scene. Even more remarkable is James Williamson's Attack on a China Mission Station (1900). The first shot shows Chinese Boxer rebels at the gate; it then cuts to the missionary family in the garden, where a fight ensues. The wife signals to British sailors from the balcony, who come and rescue them. The film also used the first "reverse angle" cut in film history. George Albert Smith (film pioneer) Science fiction and special effects. Georges Méliès Divided into Geological periods Phaneros -> phenomenon; zoic -> animals. Animal phenomena Old animals Middle animals keno -> new (from greek).
New animals https://en.wikipedia.org/wiki/History_of_genetics https://en.wikipedia.org/wiki/Imre_Festetics https://en.wikipedia.org/wiki/Hugo_de_Vries https://en.wikipedia.org/wiki/Survival_of_the_fittest https://en.wikipedia.org/wiki/History_of_evolutionary_thought Older ideas influenced by the Great chain of being A description of the Cosmos, from the physical, cosmic, non-anthropocentric, perspective. Its State, the Information it holds, i.e., what is actually found and observed in it, both in the vastness of space and the immensity of time. See Evolution Homoplasy is the appearance of similar traits in organisms when their most recent common ancestor didn't have them. See Wiki article The causes of homoplasy
are sometimes elaborated in the context of the difference between homology, where two organisms share a common genetic heritage, and convergence, where similar traits arise through independent genetic means and the primary causal force is usually attributed to selection. See Convergence, adaptation, and constraint. This binary distinction may be too simplistic (see [36–39] for some recent discussion). For the GP map bias in the Arrival of the frequent, the reason for this repetition is not a contingent common genetic history, nor the Allmacht (German for omnipotence) of selection [40], but
rather a different kind of ‘deep structure in biology’ [41]. See Measures and metrics for networks One can distinguish two types of important nodes in directed networks. We describe them for the case of an information network, like the WWW, first: This idea was implemented by Kleinberg in the hyperlink-induced topic search or HITS algorithm. The mathematical definitions that try to capture the above intuition are: Mathematically, where and are the authority and hub centralities, respectively. These equations combine to show that these centralities are in fact the eigenvectors of and , respectively, with the same eigenvalue (which must be the leading one, using similar arguments as in the cases above). (or , but not both) is a free parameter that can be chosen freely, as we don't care about relative centralities. This connection means that these centralities are similar to the eigenvector centralities for the cocitation and bibliographic coupling networks, respectively (see Mathematics of networks). A code used in Data compression that is optimal, in the sense that it achieves the entropy limit (within less than one bit). https://en.wikipedia.org/wiki/Huffman_coding https://www.cs.cf.ac.uk/Dave/Multimedia/node210.html In the animation below, the blue nodes are in the OPEN list; at every iteration we choose the two nodes with the lowest frequencies among the blue nodes (with preference for those not yet in the tree, if of equal frequency). http://www.open.edu/openlearn/science-maths-technology/science/biology/hearing/content-section-3.3 http://vaczy.dk/htm/acoustics.htm http://www.newmusicbox.org/articles/The-Musical-Ear/ Actually, which sounds sound nice together is apparently a far more complex question than rhythms (not an expert here, just curious).
The main explanation I can find (given that there are many things yet unknown, such as the roles spatial, temporal, and neural encoding play) is mentioned here: http://www.newmusicbox.org/articles/The-Musical-Ear/ The basilar membrane is known to certainly play a role in pitch perception. Now, most times we hear a frequency, we hear it from some object (like an instrument) that generates harmonics of that frequency (ultimately due to ratios of lengths and linear dispersion relations). Now, harmonically related frequencies (with simple ratios, as you say) share a lot of harmonics themselves. These will excite the basilar membrane in the same spots. And as long as the harmonics don't differ by more than about 10 Hz, they will be indistinguishable (as far as the basilar membrane is concerned, due to bandwidth). However, if you make two non-harmonic sounds with two non-commensurate objects, a lot of their harmonics will be very close, within the so-called critical bandwidth, which has been shown to cause the perception of dissonance. Now, a plausible theory for why even pure sinusoidal waves at simple ratios tend to sound better (though I did the test now; two non-harmonic sine waves don't sound nearly as bad as two non-harmonic piano notes) may be that the brain develops neuronal networks to prefer these sounds. Your theory of the brain detecting the rhythms is still interesting though, and may be relevant to the "temporal coding" theories that have been proposed, but I have not read much about those.
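The shared-harmonics account above can be played with numerically. A rough sketch (the 10 Hz tolerance, the harmonic count, and the example frequencies are illustrative assumptions): count pairs of harmonics of two tones that land very close to each other — a just fifth (3:2) produces several exact coincidences, while a non-commensurate interval produces none within the band.

```python
# Count harmonic pairs of two tones that (nearly) coincide: simple-ratio
# intervals share many harmonics, non-commensurate ones share essentially none.
def close_harmonic_pairs(f1, f2, n=10, tol=10.0):
    """Pairs (i, j) with |i*f1 - j*f2| < tol, over the first n harmonics."""
    return [(i, j) for i in range(1, n + 1) for j in range(1, n + 1)
            if abs(i * f1 - j * f2) < tol]

print(len(close_harmonic_pairs(220.0, 330.0)))  # perfect fifth (3:2): 3
print(len(close_harmonic_pairs(220.0, 311.1)))  # ~tritone, sqrt(2)-ish: 0
```

Detuning the second tone slightly from 330 Hz would turn the exact coincidences into nearby pairs that beat, which is the dissonance mechanism described above.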
http://plasticity.szynalski.com/tone-generator.htm The Neural Code of Pitch and Harmony https://en.wikipedia.org/wiki/Basilar_membrane https://en.wikipedia.org/wiki/Pitch_%28music%29#Theories_of_pitch_perception https://en.wikipedia.org/wiki/Consonance_and_dissonance#Physiological_basis_of_dissonance https://en.wikipedia.org/wiki/Music_psychology#Neural_correlates_of_musical_training https://en.wikipedia.org/wiki/Psychoacoustics#Music Music and measure theory The reason it works so well to have twelve notes in the chromatic scale is that powers of the twelfth root of two tend to be within a 1% margin of error of simple rational numbers. And it's good to have powers of the same factor for the notes, because the brain perceives separation between frequencies logarithmically, not linearly. Human positions refer to the different physical configurations that the human body can take. Land's Demonstration - in Chapter 04 - Senses - from Psychology: An Introduction by Russ Dewey A theory of the Benham Top based on center-surround interactions in the parvocellular pathway Land effect: Red and White Demonstration Changes in pattern induced flicker colors are mediated by the blue-yellow opponent process Contribution of local and global cone-contrasts to color appearance: a Retinex-like model The knowledge regarding all the natural aspects and artificial constructs related to Humanity, the collective of the Human species, evolved on Planet Earth. We include here what is normally known as the humanities, but also the Social sciences that treat aspects of Humanity (so, for example, scientific studies of animal societies are not part of the humanities, although they are part of the social sciences). Although humans are the origin of our currently known complex social systems, transhuman advancements (like the development of AI, Mind uploading, or Genetic engineering), or the discovery of Extraterrestrial life, may make future non-human agents as important as, or even more important than, humans in society.
Cosmos will need to then be upgraded with a new term more encompassing than "Humanities". Society is one candidate for such a general term, and indeed Social sciences have gone beyond standard humanities in studying social aspects of non-humans. In any case, our current social systems are still mostly human-centered, and the centrality of this tiddler represents that state of affairs. Note that even as animal rights movements are succeeding in giving animals fundamental rights of living and sentient beings (such as the right to be protected from suffering), animals will probably still play a secondary role in society, as humans are generally more complex and intelligent in their behaviour. The study of Humans per se (whether individually, or collectively in societies) is called Anthropology. Humans are part of the Tree of life, and their natural aspects are thus studied in Biology, in particular in biological anthropology. On the other hand, the collective of artificial constructs created by Humanity is known as Culture (studied by cultural anthropology). Humans organize themselves in societies. The organization often involves systems of Law (~what can* be done), Politics (~what should we do), and Economics (~how do we get what we need). * "What can be done" here refers, of course, not just to what can be done by physical laws, but to what is permitted by Law, the societal construct that dictates what humans are allowed or not allowed to do by society. Physical aspects of these societies are mostly studied in Geography, particularly in Human geography. Human communication is a crucial aspect of the human condition, and of the resulting societies. It is studied in Linguistics (and in particular, for humans, in linguistic anthropology) https://en.wikipedia.org/wiki/Hund%27s_rules http://hyperphysics.phy-astr.gsu.edu/hbase/atomic/hund.html The first two rules are mostly caused by the Coulomb interaction. The third is caused by spin-orbit coupling.
A fixed point is called hyperbolic if none of the eigenvalues of the Jacobian evaluated at the point have zero real part. See Simplicity bias in finite state transducers On the second question, there is actually a stumbling block due to the random FST ensemble I'm using, which consists only of accessible FSTs (of given size). Accessible means that any state can be reached from the initial state (so that there are no 'useless' states). This is in contrast to random unrestricted FSTs, where each of the K_i n out-stubs are connected to a state, independently and uniformly at random.. Answering probabilistic questions for the latter is much easier than for accessible FSTs (see attached or http://bit.ly/290fHji). I guess we could simulate random unrestricted FSTs, though I think accessible FSTs are a more interesting ensemble, because you fix the actual number of states in the automaton. Anyway, there may still be some things to say here, because in the article I attach he finds a way of relating statistics of automata to those of accessible automata, but only asymptotically, and with inequalities. There may be other approaches with Analytic combinatorics, but they are potentially quite hard. Regarding the first question, I've been refining my ideas about loops of 'noncoding states' (with output symbols being equal). In particular, looking at the experimental results, I've noticed that bias is associated with 'absorbing regions' that contain at least one non-coding state (approximately absorbing regions also show some bias). An absorbing region is a set of states which you can reach, but which you can't leave. Now, I've found two main factors determining the frequency/neutral-set-size/designability (call this NSS) of an output of an FST that contains this: Now, I've also found that the NSS is multiplied by 2^(a*m), where a depends on the structure of the absorbing region (in an interesting combinatorial way). So the NSS \propto 2^(a*m). 
The proportionality constant will depend on the particular string, and the number of noncoding states it passes through outside the absorbing region (this requires more attention). Now, if the m output bits from the absorbing region are composed of a repeating pattern (often the case, but I can think of exceptions..), the Lempel-Ziv complexity C <= n-m + const., where n is the total number of bits, and const is the number of bits in the repeating pattern. Under these assumptions, one can see that the frequency of an output obeys P = NSS/2^n <= 2^(-a*C + b), where I lump all the proportionality constants above into b.. The Fibonacci GP map described in the paper on constrained/unconstrained parts is actually an example of the simplest kind of FST with the properties above. It can be implemented as a 3-state FST, with an absorbing region consisting of a single non-coding state, and no non-coding states anywhere else. Thus the arguments above work very cleanly. Unfortunately, general FSTs can show more complicated things, like: All these complicate the picture, and should be taken into account more fully to improve the argument above. In any case, it makes sense that the argument above can't be exact (except for simple cases like the Fibonacci GP map), because most FSTs show a complexity/frequency plot which is not perfect, but has some noise. Hope that wasn't too long. I think also that all this will be easier to understand with pictures... The branch of medicine and biology concerned with immunity, that is, the ability of an organism to resist a particular infection or toxin by the action of specific antibodies or sensitized white blood cells. A Data type corresponding to a value that can't be changed. Immutable types: understand their benefits and use them Usable for Concurrent computing See here: Chapter 2 Information Measures - Section 2.1 A Independence and Markov Chains https://en.wikipedia.org/wiki/Conditional_independence See here.
Note that his definition is the same as in wiki. Just divide by to see this. His example at the end is rather illustrative too. The number of independent paths between two vertices (the connectivity) gives a measure of how strongly connected they are. Paths can be vertex-independent if they share no vertex (other than the starting or ending vertices), or edge-independent if they share no edge. A vertex (edge) cut set is a set of vertices (edges) that, if removed, will disconnect a specified pair of vertices. A minimum cut set is the smallest such set for the pair of vertices. For weighted networks a minimum cut set is the set of such vertices that has the least total weight. Menger's theorem: if there is no cut set of size less than k, then there are at least k independent paths. This actually implies that the size of the minimum cut set equals the connectivity of the two vertices: each independent path must be cut somewhere, so the cut set can be no smaller than the number of independent paths; conversely, by Menger's theorem, a minimum cut set of size k guarantees at least k independent paths. The maximum flow between two vertices, if the network were made of water pipes, is the number of edge-independent paths times the maximum flow a single pipe can sustain, or pipe capacity, c. Let k be the size of the minimum edge cut set. Clearly, kc is a lower bound for this max flow, since each independent path will independently carry max flow c. Also, if we remove an edge that forms part of a path between them, we decrease the flow by at most c. Thus, if we remove the k edges of the minimum cut set, we decrease the flow by at most kc, but this must remove all flow. Hence the total flow is at most kc, which is then an upper bound. kc is both an upper and lower bound, and hence the maximum flow must equal kc. This is the max-flow/min-cut theorem, for the special case of the same capacity for all pipes. The max-flow/min-cut theorem can be generalized to weighted networks. This can be shown by transforming the weighted network into a multigraph. 
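The unit-capacity argument above can be checked numerically. Below is a minimal sketch (all function names are mine, not from the notes) of the Edmonds-Karp max-flow algorithm; on a unit-capacity digraph the returned flow equals both the number of edge-independent paths and the minimum edge cut size:

```python
from collections import deque

def max_flow(graph, s, t):
    """Edmonds-Karp: repeatedly augment along shortest (BFS) paths.

    graph: dict u -> dict v -> integer capacity. The caller's graph is
    not mutated; we work on a residual-network copy with reverse edges."""
    res = {u: dict(nbrs) for u, nbrs in graph.items()}
    for u, nbrs in graph.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)  # reverse edge, capacity 0
    flow = 0
    while True:
        # BFS for a shortest augmenting path from s to t.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, cap in res[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Recover the path, find its bottleneck capacity, and augment.
        path = []
        v = t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        flow += bottleneck

# Unit capacities: the max s-t flow equals the number of edge-independent
# s-t paths, which by max-flow/min-cut equals the minimum edge cut size.
g = {'s': {'a': 1, 'b': 1}, 'a': {'t': 1}, 'b': {'t': 1}, 't': {}}
print(max_flow(g, 's', 't'))  # two edge-independent paths -> 2
```

Using BFS (rather than arbitrary augmenting paths) is what makes the running time polynomial in the network size, independent of the capacities.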
This result is useful because some computer algorithms (maximum flow algorithms) can compute maximum flow easily. But, by the result above, they also calculate the minimum cut set size, and the connectivity, which can be used to find clusters in networks. This is in fact the current standard numerical method for connectivities and cut sets. The max-flow/min-cut theorem has been used in a polynomial-time algorithm for finding ground states of the thermal random-field Ising model. See reference [257] in Newman's book. Industrial engineering is a branch of engineering which deals with the optimization of complex processes or systems. Industrial engineers work to eliminate waste of time, money, materials, man-hours, machine time, energy and other resources that do not generate value. According to the Institute of Industrial and Systems Engineers, they figure out how to do things better: they engineer processes and systems that improve quality and productivity. Industry is the production of goods or related services within an economy, by processing raw materials. This process is more generally called Manufacturing. Industry is, therefore, manufacturing in the context of an economy. Influence maximization/optimization in complex networks through optimal percolation Keywords: Social dynamics, Networks, Percolation Influence maximization in complex networks through optimal percolation I think this article will be of interest to people investigating social or other networks over which something is transmitted over the edges (whether these are infections, messages, opinions...). These arise in many problems in science and engineering, especially those involving complex social networks. In these networks one can often assign importance to nodes by seeing how much their removal disrupts the potential spread of the unit being transmitted across the network. 
In particular, the optimal influence problem tries to maximize the influence on the network by affecting the least number of nodes. This article presents a novel algorithm that can find very good approximate solutions to this problem, which is generally NP-hard. They do this by first expressing the problem in terms of a percolation process, so that maximum influence corresponds to making the giant connected component disappear with the least number of nodes removed. Although for small networks this can be tackled using methods from statistical mechanics, an adaptive algorithm is more effective for large networks. They demonstrate its effectiveness, as well as its superiority over other heuristic algorithms, in both synthetic and real networks. Although I think the article does a good job of summarizing the results in the 4 pages of the letter, I think some more explanation of the connection between the optimal influence problem and their mathematical formulation would be useful to aid the reader's understanding (leaving the SI only for non-crucial details). For instance, I think that it should be mentioned that the stability of the solution, under the locally tree-like assumption, is what determines whether the GCC is present or not, for large networks. Optimal influence chooses the minimum number of nodes that make the solution stable. Similarly, I think the vector is introduced without explaining what it represents (a perturbation to the order parameter vector). 'Smaller is smarter' in superspreading of influence in social network Shannon's Information Measures Continuity of Shannon's Information Measures Some Useful Information Inequalities Three approaches to the quantitative definition of information See Data transmission. An information source is often modelled as a discrete-time stochastic process, so it is a sequence of Random variables, taking values in a set called the source alphabet. 
A stationary information source is one corresponding to a stationary stochastic process, so that any {finite block of random variables} and {any of its time-shifted versions} have exactly the same joint distribution. An important property of an information source is its Entropy rate Information theory Information Theory, Information Theory (CUHK) A code is a representation of information/data. Coding theory (and/or coding methods) is the study of the properties of codes and their fitness for a specific application. These applications include Data transmission, Data compression, Cryptography, and Network information theory See Source-channel separation theorem The main problem of study in data transmission theory is: for a particular Communication channel, find a code so that the data transmission rate is as high as possible, while the receiver receives the information with negligible probability of error. The limit in data transmission rate turns out to be the Channel capacity, as established by the Channel coding theorem. Data transmission is part of the broader area of study called Communication theory, which includes consideration of the information source and destination. Study of theoretical limits and implementation of codes that make the average encoded length of the value of a random variable as short as possible, whether in a lossless or lossy way. The limit on the average length of codewords in a lossless code turns out to be the entropy, as established by the Source coding theorem Limits for lossy codes are established in Rate-distortion theory Kolmogorov complexity. Shortest program that will produce the desired output on a Turing machine. Occam's razor Shannon - A Mathematical Theory of Communication General theory of information transfer: Updated Storing and Transmitting Data: Rudolf Ahlswede's Lectures on Information ... Information Theory, Combinatorics, and Search Theory An injective function, also called one-to-one, is a function such that f(x1) = f(x2) implies x1 = x2. 
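To tie this back to the source coding theorem mentioned above: a small sketch (assuming a memoryless source; `huffman_lengths` is a name I made up) that builds an optimal prefix code via the standard Huffman construction and checks that the entropy bounds the average codeword length:

```python
import heapq
import math
from collections import Counter

def huffman_lengths(freqs):
    """Codeword lengths of an optimal prefix code (Huffman construction).
    freqs: dict symbol -> count. Returns dict symbol -> length in bits."""
    heap = [(f, i, {sym: 0}) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)  # unique tiebreaker so dicts are never compared
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        # Merging two subtrees pushes every leaf one level deeper.
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

text = "abracadabra"
freqs = Counter(text)
n = len(text)
lengths = huffman_lengths(freqs)
avg_len = sum(freqs[s] * lengths[s] for s in freqs) / n
entropy = -sum(f / n * math.log2(f / n) for f in freqs.values())
# Source coding theorem (lossless): H <= average length < H + 1.
assert entropy <= avg_len < entropy + 1
```

For "abracadabra" the entropy is about 2.04 bits/symbol and the optimal average length is 23/11 ≈ 2.09 bits/symbol, inside the one-bit gap the theorem allows.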
Integrating Symbols into Deep Learning Abstract of talk: Computer Science is the symbolic science of programming, incorporating techniques for representing and reasoning about the semantics, correctness and synthesis of computer programs. Recent techniques involving the learning of deep neural networks have challenged the "human programmer" model of Computer Science by showing that bottom-up approaches to program synthesis from sensory data can achieve impressive results, ranging from visual scene analysis to expert-level play in Atari games and world-class play in complex board games such as Go. Alongside the successes of Deep Learning, increasing concerns are being voiced in the public domain concerning the deployment of fully automated systems with unexpected and undesirable behaviours. In this presentation we will discuss the state-of-the-art and future challenges of Machine Learning technologies which promise the transparency of symbolic Computer Science with the power and reach of sub-symbolic Deep Learning. We will discuss both weak and strong integration models for symbolic and sub-symbolic Machine Learning alongside ongoing work on applications in this area. Integrating symbols into deep learning talk notes: Recently seen in PRE: Fractional telegrapher's equation from fractional persistent random walks Nondeterministic self-assembly of two tile types on a lattice Synopsis: Trees Crumbling in the Wind Non-Hermitian localization in biological networks Physical origin of nonequilibrium fluctuation-induced forces in fluids Pattern formation in flocking models: A hydrodynamic description Random walk with random resetting to the maximum position Flocking with discrete symmetry: The two-dimensional active Ising model Spatial distribution of thermal energy in equilibrium Signatures of infinity: Nonergodicity and resource scaling in prediction, complexity, and learning See Phoretic mechanisms of colloids. 
Colloid Transport by Interfacial Forces Interfaces: in fluid mechanics and across disciplines Excluded volume van der Waals Hydrophobic Electrostatic Principles and applications of nanofluidic transport http://sci-hub.cc/10.1017/S0022112010004404 Diffusio-osmosis, Thermo-osmosis, etc. Debye-Hückel double layer See book by Hunter - Foundations of colloid science Intermolecular forces are usually composed of a repulsive and an attractive part: if the potential indeed has an attractive component (the repulsive one always exists), then the potential will present a minimum, corresponding to an equilibrium state, known as a bond. The depth of the potential minimum (relative to the thermal energy, kT) determines the strength, or stiffness, of the bond. One often makes a distinction between: Common intermolecular forces HTTP vs IPFS: is Peer-to-Peer Sharing the Future of the Web? Find websites with certain TLD (top level domain): https://domainpunch.com/tlds/topm.php A fixed point (and other structures too, if suitably generalized) of a dynamical system has three kinds of invariant manifolds: see Chapter 3 in Wiggins book. ChaosBook.org chapter Stretch, fold, prune - Stable, unstable manifolds ChaosBook.org chapter Stretch, fold, prune - Plotting an unstable manifold A Membrane protein that regulates the flow of ions. This is a kind of active Cell transport. A permeation theory for single-file ion channels: Concerted association-dissociation A permeation theory for single-file ion channels: One- and two-step models A lattice with spins interacting with nearest neighbours to favour either alignment or anti-alignment, as a minimal model of a ferromagnet. It has many connections with other systems in Statistical physics, and Complex systems, due to the abstract nature of the model. The 1D Ising model was solved by Ising and others. A major breakthrough in statistical physics was the exact solution of the Ising model in two dimensions [107]. Onsager
gave in 1944 a complete solution of the problem in zero external magnetic field. But in three dimensions, Istrail has shown [108] that essentially all versions of the Ising model are computationally intractable across lattices, and thus the 3D Ising model, in its full generality, is NP-complete. For another model with many interesting connections, see Spin glass A particular kind of Self-propelled particle with a kind of asymmetry corresponding to half of the particle having one property, and the other half a different one. Most often a Janus swimmer refers to spherical colloids, where one hemisphere is coated with some material, and the other with a different one (or just exposing the material of the colloid itself). A particular kind is the Catalytic conductor-insulator Janus swimmer The language of the web A Programming language, often used for Frontend web development. https://github.com/dominictarr/hyperscript Graphics and visualization web libraries 5 JAVASCRIPT LIBRARIES FOR JULY 2016 Reactive programming: http://reactivex.io/ OOP on JS: https://github.com/jneen/pjs http://requirejs.org/docs/commonjs.html http://www.typescriptlang.org/Tutorial https://material.angularjs.org/latest/ https://www.polymer-project.org/1.0/ Animation and graphics:
https://www.khanacademy.org/computing/computer-programming
https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API/Tutorial/Getting_started_with_WebGL Data structures: http://jnuno.com/tree-model-js/ Functional programming: Functional programming in Javascript Functional progr JS library: http://ramdajs.com/0.19.1/index.html. https://lodash.com/. This looks awesome: http://elm-lang.org/ Other tools: https://babeljs.io/docs/setup/ To compile ECMAScript 2015 to normal compatible JS !! Meteor has ES6 package already Testing: https://github.com/sindresorhus/ava for concurrent testing Other JS-related languages TypeScript http://www.typescriptlang.org/ Coffeescript A join is an operation defined on elements of a poset (not necessarily all of them) defined as: the join (or Least upper bound) of a subset is an element such that: Note that, if it exists, a join is necessarily unique. See also Lattice (algebraic structure) In Information theory, the joint entropy of a pair of Random variables X and Y is defined as: H(X,Y) = -sum_{x,y} p(x,y) log p(x,y). Animation: http://laughinghan.github.io/radiance/ Awesome: timeline-based web animations with gui: https://spiritjs.io/ Also for animation: http://anime-js.com/ Jurisprudence is the science, study and theory of law. https://en.wikipedia.org/wiki/Jurisprudence See Law One can use the Jury test to find if the roots of a polynomial are inside the unit circle, which is useful for stability analysis of Nonlinear maps. This test turns out to be useful in stability analysis of discrete-time systems in control theory. Jury test Given a quadratic equation of the form: Both eigenvalues fall within the unit circle iff these three conditions hold: The way to show this is to divide the problem into two cases (depending on whether the eigenvalues are real): An Explosive percolation process that is based on choosing vertices at random and adding edges among those vertices according to some rule. 
-vertex rules are actually a generalization of -edge rules (aka Achlioptas processes), because a -edge rule can be constructed from a -vertex rule, where , which chooses vertices at random (possibly repeating, but still being able to have distinct edges), and then chooses edges at random within these vertices. Note that we need so that we don't restrict the chosen edges to have some vertex in common. -vertex rule (as defined here): in processes following an -vertex rule, the agent is presented with a random list (set) of vertices, and, unless two or more are already in the same component, must add one or more edges between them, according to any deterministic or random rule that depends only on the history. Some -vertex rules are examples of Non-self-averaging percolation processes, showing novel supercritical phenomena, like stochastic staircases! As in Achlioptas processes, where phase transitions are continuous, it was shown that the Percolation phase transition for processes following a vertex rule is continuous. However, they can still show some discontinuity arbitrarily close to the critical point (see Non-self-averaging percolation process). See Measures and metrics for networks Katz centrality solves the problem posed above by giving all vertices a "free" centrality: (Eq. 2) or rearranging and setting , because all we care about is relative centralities: This is the Katz centrality. Often one computes this not by inverting the matrix (which requires computations), but by iterating using Eq. 2 (which requires just multiplications per step (number of nonzero elements of ), and often fewer steps overall). A useful extension is to take , i.e. give each node possibly a different weight, perhaps expressing some non-network importance. By Taylor expanding it we can see it is like Eigenvector centrality, but taking into account paths of all lengths, each with a weight. Regression using certain basis functions (i.e. 
find coefficients for a certain linear combination of these that fits the training data). Standard ones are polynomials (see the Weierstrass approximation theorem; but possible terms become very large as we increase the degree). Can also use Gaussians, or radial basis functions (RBFs). Once kernel functions are used, one can use the same methods as for linear regression. Basically, we replace each input datum with the kernel functions evaluated at the input datum. https://en.wikipedia.org/wiki/Kinase A kinase is an Enzyme that catalyzes the transfer of phosphate groups from high-energy, phosphate-donating molecules to specific substrates (from my email) In case anyone cares, I think I worked out the irreversibility thing (it's been good revision trying to figure it out, so may be good revision for you too:P): 1. All flow is time-reversible in the theoretical sense that if you reverse all particles' trajectories, you get another physical flow. This is not in general true, though, if you include the viscosity term (although it can be), as this is a term that is there to account for the degrees of freedom we aren't accounting for, so it is in general not time-reversible, just like friction isn't (balls don't just spontaneously start moving from rest while cooling the floor slightly; this is just the 2nd law). 2. What we usually talk about in fluid mechanics, though, is whether the flow is time-reversible in practice (I think this is called kinematic reversibility). What this means is whether or not I can perform the above theoretical operation of reversing the particles' trajectories, by performing a practically reasonable action. Such a reasonable action is usually changing the boundary conditions of the flow, as that is easy to do. In the case of the dye drops, this is what they control, the surfaces of the cylinder
Now, the boundary conditions determine a certain steady-state flow. The important thing about viscous flows (low Re) is that they reach steady state very quickly, so that most of the particles' trajectories are spent in steady state, and so most of the trajectory is determined by the boundary condition (b.c.) we control. So we can just turn the cylinder one way, and then the other, and they will have very nearly retraced their steps (I think they turn the cylinder slowly because that keeps Re=UL/nu small).
In higher Reynolds number flows, however, the time the system takes to reach the steady state set by the b.c.s is very significant. Therefore a significant portion of the particles' trajectories is spent in these transient periods. Now, I think the reason these transient periods break (practical) time reversal is because they are not determined completely by the b.c.s you control. I think this is because turbulence will most probably set off in them, and as we know, turbulence is random, i.e., out of your control, and thus you can't reverse that (significant) part of the flow. The reason I think turbulence will set off is that when you start moving the cylinder (in the experiment with the dye), the no-slip boundary condition will cause a boundary layer, which is very thin due to the low viscosity. This sharp gradient in velocity means high vorticity, which, as usual in high Re flows, will spread around, before getting dissipated eventually. This is just the standard onset of turbulence, actually. Another note: I changed my mind on the pressure thing. I think Chris was right that the gradient of the pressure (though not the pressure itself) will change sign, for Stokes flow. This is actually just because what you do when you time-reverse is change the flow, and the flow determines the pressure distribution, so you can calculate what will happen to the pressure. If you do that for Stokes' equation, its gradient must change sign, as the viscous term does. Examples: However, in non-Stokes flow, like say in steady-state inviscid flow, for which Bernoulli's theorem holds, the gradient of the pressure doesn't change, as the other terms in the NS equation don't either! Example: The mechanism by which phase separation occurs depends on whether the concentration proportions fall within the spinodal, or outside it, i.e. whether they are in the unstable or metastable region (see Thermodynamics of liquid-liquid unmixing). 
When it is unstable, the phase separation proceeds immediately and continuously, via a process known as spinodal decomposition. When the mixture is in the metastable region of the phase diagram, then there is a free energy barrier to be overcome, which requires a large concentration fluctuation to form a nucleus, which can then grow. This is known as homogeneous nucleation. However, most often impurities trigger the growth before this happens, and this is known as heterogeneous nucleation. When the mixture is in the unstable region any small fluctuation in concentration will tend to be amplified, and this is known as spinodal decomposition. This kind of "uphill diffusion" occurs because the fundamental quantity that tends to be equilibrated, and thus diffuses to remove gradients, is the chemical potential (how to derive this from a more macroscopic description, perhaps using Kinetic theory??). The chemical potential is related to the first derivative of the free energy. So if the second derivative is positive (as outside the spinodal region), the higher the concentration the higher the chemical potential, and diffusion acts to reduce concentration gradients. However, inside the spinodal region, the second derivative is negative, and the chemical potential decreases with concentration, and thus diffusion acts to increase concentration gradients. If this were the only mechanism, sharp features would grow the fastest (just as they decay the fastest in normal diffusion). However, there must be something we have neglected. This is because, experimentally, it is found that interfaces have free energy, which isn't included in our free energy (see LectureNotes regarding surface tension). [add fig. 3.7 here] A phenomenologically motivated addition to the free energy to account for this is a term proportional to the square of the gradient in concentration with respect to position. 
Then one can derive a modified diffusion equation based on: One then obtains a nonlinear equation, which when linearized around a uniform concentration gives the Cahn-Hilliard equation. See more here: http://pruffle.mit.edu/~ccarter/3.21/Lecture_22/ A Kleene star, in Mathematical logic and Computer science (or Kleene operator or Kleene closure), is a unary operation, either on sets of strings or on sets of symbols or characters. If V is a set of symbols or characters then V* is the set of all strings over symbols in V, including the empty string ε. It is often used in Coding theory, Formal language theory, etc. aka algorithmic complexity, although that term may refer to some generalizations of Kolmogorov complexity too, I think. One of the main kinds of Descriptional complexity, based on the minimum size of a program (interpreted by a Turing machine) that produces (describes) the object. Kolmogorov complexity is central in Algorithmic information theory. Math 574, Lesson 4-3: Kolmogorov Complexity
other videos This is based on describing the information content of a discrete object such as a binary string x in terms of the length of the shortest program that generates x on a universal Turing machine (UTM). This measure is called the Kolmogorov-Chaitin complexity, or simply Kolmogorov complexity, of x. AIT differs fundamentally from Shannon information theory because the latter is fundamentally a theory about distributions, whereas the former is a theory about the information content of individual objects. Descriptional complexity also differs from the notions of complexity used in Complex systems. Lecture notes on descriptional complexity and randomness –> Calculating Kolmogorov Complexity from the Output Frequency Distributions of Small Turing Machines See Coding theorem method Deficiencies of KC from here Simple strings are rare among all possible strings (paucity): The frequent paucity of trivial strings A Computable Measure of Algorithmic Probability by Finite Approximations also called metric or measure-theoretical entropy See the related Topological entropy For a Measure-theoretical dynamical system, the metric entropy of the system with respect to a partition is defined to be the Entropy rate of the stochastic process resulting from the partition. The metric entropy (aka Kolmogorov–Sinai or measure-theoretical entropy) is then the supremum of {the metric entropy with respect to a partition} over all finite partitions. Metric entropy provides
the maximum average information per unit of time obtainable from the dynamical system. See Amigo's book for details. He also gives a good example with the tent map. Note his notation refers to the join of two sigma-algebras. See here or KS test See its application, and explanation, in Power-law distributions in empirical data. It is a nonparametric test; in Power-law distributions in empirical data it is combined with Likelihood functions for fitting. https://www.wikiwand.com/en/Kolmogorov%E2%80%93Smirnov_test http://www.itl.nist.gov/div898/handbook/eda/section3/eda35g.htm An effect, observed in Dilute magnetic alloys, by which the resistance rises at low temperature. It is named after the Japanese
physicist Jun Kondo, who in 1964 published a calculation that
indicated how the resistance minimum arose. Laplace-method approximation of the mean first-passage time for a degree of freedom, following a Fokker-Planck equation, to overcome a barrier. Most easily solved in 1D, where the result is tau ≈ (2*pi*gamma / sqrt(U''(x_min) |U''(x_max)|)) * exp(ΔU / kB T), which is known as the Kramers escape time. The exponential has the same form as the phenomenological Arrhenius equation, and the pre-factor is known as the inverse attempt frequency. It is used to estimate reaction rates in Chemical kinetics, where one defines a reaction coordinate approximating the evolution of the relevant molecules and their potential energy. Barrier crossover time from probability distribution Can use the (conditional) probability distribution when a barrier is present to calculate the crossover time. This is done by considering the flux (see Fokker-Planck equation). The probability distribution for crossing the barrier is then: where we assume the barrier is at the point x_b, so that the start and end points are on opposite sides. The probability distribution for crossing the barrier above is the same as the probability distribution for the first passage time. To understand why we use the flux (current, J) to calculate this, imagine many instances of the Brownian particle in the potential. We can approximate the above by just considering frequencies, in the limit of infinite instances. Now, J is just calculated by counting the number of times a particle is found within dx of the point x, at time t, multiplied by its velocity at that moment. Now, consider a first-passage path... Well the idea is that for every second-passage path, there is a symmetric one with opposite velocity at the measurement point (x,t), that thus cancels it in the sum: See NonEq statmech notes. 
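A rough numerical check of the Arrhenius-like scaling of the Kramers escape time, as a sketch only: the double well U(x) = x^4/4 - x^2/2 and all parameter values are my own illustrative choices, not taken from the notes:

```python
import math
import random

def escape_time(D, dt=0.01, x0=-1.0, x_exit=0.5):
    """First-passage time over the barrier of the double well
    U(x) = x**4/4 - x**2/2 (minima at x = ±1, barrier height 1/4),
    simulated with Euler-Maruyama for the overdamped Langevin equation
    dx = -U'(x) dt + sqrt(2*D*dt) * N(0, 1)."""
    x, t = x0, 0.0
    noise = math.sqrt(2.0 * D * dt)
    while x < x_exit:
        x += -(x**3 - x) * dt + noise * random.gauss(0.0, 1.0)
        t += dt
    return t

random.seed(0)
# Mean escape times at two noise strengths. Kramers/Arrhenius predicts
# tau ~ exp(barrier / D), so halving D should lengthen the escape time
# by roughly exp(0.25/0.10 - 0.25/0.20) ≈ 3.5x.
taus = {D: sum(escape_time(D) for _ in range(40)) / 40 for D in (0.10, 0.20)}
assert taus[0.10] > taus[0.20]
```

Only the exponential trend is checked here; with a barrier of only a few times D, the Kramers prefactor is not expected to be accurate.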
Also: http://www-sop.inria.fr/members/Olivier.Faugeras/MVA/ArticlesALire09/acebron-bonilla-etal-05.pdf http://arxiv.org/pdf/1403.2083v2.pdf https://en.wikipedia.org/wiki/Kuramoto_model Things to note: We transform in most manipulations (including in the notes) to a frame that rotates with angular frequency equal to the mean angular frequency of the oscillators. In this frame, the assumption is that the phase of the order parameter is constant, and so can be chosen to be zero. The fact that it is constant is used to deduce that for the non-phase-locked oscillators (in the case of partial coherence) their probability distribution must be constant, so that they are in a state of dynamic equilibrium (because their drift velocity can't vanish as it does for the phase-locked states). These differences in behaviour between phase-locked and non-phase-locked oscillators come from solving their dynamical equations (eq. (9) in this paper), the behavior of which depends on the parameter. Langevin description of Brownian motion with potential Harmonic potential General case The Laniakea Supercluster (Laniakea; also called Local Supercluster or Local SCl) is the Galaxy supercluster that is home to the Milky Way and 100,000 other nearby galaxies. For integrals of the form int f(x) exp(N g(x)) dx, as N -> infinity the contributions come from near the global maxima of g. Special case: there are three cases. Case 1: the maximum is at an interior point (since it is a maximum, the derivative vanishes there, and we assume the second derivative is nonzero). Case 2: the maximum is at an endpoint. Case 3: the maximum is at a point where the second derivative also vanishes. Components Networks often have the largest connected component covering most of the network (often more than 50% or 90%). This is sometimes called the "giant component" (however this is sloppy usage, as the term "giant component" does not mean precisely the same as "largest component" in network theory). 
In directed networks, we can represent the largest strongly connected component, and its in- and out-components, using a "bow tie" diagram Shortest paths and the small-world effect The small-world effect refers to the finding that the typical distance between nodes in many –perhaps most– networks is surprisingly small. The "typical distance" usually refers to the mean geodesic distance. Networks that show this property are dubbed small-world networks. The origin of the term comes from a series of experiments by social psychiatrist Stanley Milgram, the so-called "small-world" studies, in the 60s. Models of networks often show that this distance scales as log N, where N is the size of the network. This is often given as an upper limit for the growth of the distance with N, so that the network is said to have the small-world property. The diameter (the largest geodesic distance) is also found to scale similarly. For scale-free networks, however, an interesting structure is often found, with a core that contains most nodes and is of lengthscale log log N, making the mean distance scale like that too, but there are a few nodes along "streamers" or "tendrils" around the core, whose lengthscale scales as log N, making the diameter scale like that too. Another interesting effect that is observed, termed funneling, is that often it is found that the geodesic path (path(s) with shortest length) between a starting vertex and a target vertex passes through only a few particularly well-connected neighbours of the target, for most choices of starting point. Thus if one follows shortest paths to try to reach the target, one is likely to be "funnelled" through those few (or one) particular neighbours. Degree distributions The degree distribution p_k is the fraction of nodes in the network that have degree k. The same information can be given in a degree sequence, that is, a sequence of the degrees of all the nodes in the network. One can easily see from simple examples that this information doesn't, however, specify the network structure, in general. 
For directed networks, we can define the joint in- and out-degree distribution p_{jk}, the probability that a vertex has in-degree j and out-degree k. This has currently rarely been measured in practice, though. Power laws and scale-free networks Often (though definitely not always), real networks show a power-law degree distribution: p_k ~ k^(-alpha), where alpha is the exponent. Values 2 <= alpha <= 3 are typical. These are examples of right-skewed distributions. Typically, the power law is only obeyed for the tail of the distribution, but not for small values of k. And typically it is also not obeyed in the high end, for example, due to some cut-off. Networks with power-law degree distributions are sometimes called scale-free networks. Distributions of other centrality measures Distributions of the values for nodes for other centrality measures defined in Measures and metrics for networks. Centralization We can use the distribution of centrality measures to answer the question: "how are the centrality values spread?". High spreads indicate a good centrality measure (or very high centralization, I think), while low spreads indicate a poor centrality measure (or decentralization, I think). A measure for it is: where in the denominator, is the betweenness centrality of node , and is a node that maximizes it, both for the graph that maximizes (a star graph for betweenness, for example). The one without the tilde is for the actual graph. Dynamical importance (& eigenvalue elasticity) The edge dynamical importance is: where λ is the largest eigenvalue of A, and Δλ is the change in λ from removing the edge from i to j (i.e. removing A_ij). The node dynamical importance is: where Δλ is the change in λ from removing the node (i.e. removing its column and row). One can show that: where the approximation is in only considering the changes in eigenvalue and eigenvector to 1st order. See problem sheet 4 answers for proof. Structural things related (by spectrum, often) to dynamics Dynamics of removing nodes and edges he means? 
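The power-law fitting mentioned here (cf. Power-law distributions in empirical data) can be sketched as follows; I assume a continuous power law for simplicity, which sidesteps the discrete corrections that paper discusses, and the function names are my own:

```python
import math
import random

def sample_power_law(alpha, x_min, n, rng):
    """Inverse-transform samples of the continuous power law
    p(x) ~ x**(-alpha) for x >= x_min."""
    return [x_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0))
            for _ in range(n)]

def mle_exponent(xs, x_min):
    """Maximum-likelihood (Hill) estimator of the exponent:
    alpha_hat = 1 + n / sum(ln(x_i / x_min))."""
    return 1.0 + len(xs) / sum(math.log(x / x_min) for x in xs)

rng = random.Random(42)
xs = sample_power_law(2.5, 1.0, 50_000, rng)
alpha_hat = mle_exponent(xs, 1.0)
# Standard error of the estimator is (alpha - 1) / sqrt(n) ~ 0.007 here,
# so the estimate should land very close to the true exponent 2.5.
assert abs(alpha_hat - 2.5) < 0.1
```

The MLE is strongly preferable to fitting a line to a log-log histogram, which is known to give biased exponents.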
Clustering coefficients (see Transitivity (Graph theory)) Clustering coefficients are often found to be larger than one would expect if edges were randomly chosen (for a fixed degree distribution, for example; see formula 8.24 in Newman's). This is often the case for social networks. One mechanism that can lead to this is triadic closure (when an open triad is closed, say because the common vertex introduces the other two, in the case of social nets). This has indeed been found to happen in cases where time-resolved data for network formation is studied. In the Internet, however, the clustering coefficient is much smaller than the value predicted by chance (eq. 8.24 in Newman), suggesting there are forces that shy away from creating triangles. However, different models to compare with (i.e. other random graph models), and other ways of measuring clustering coefficients, give different results. Other motifs apart from triangles are also measured sometimes and show interesting patterns (like in neural networks). Local clustering coefficients often show an interesting anti-correlation with degree in real networks. An explanation for this phenomenon can be given if the network has a community structure with nodes grouped together in groups of varying sizes. A hierarchical structure has also been proposed to explain this. Assortative mixing Assortative mixing is the tendency of nodes to connect to others that are like them in some way. The formula given there is not very efficient to compute, because of the double sum going like n^2. There is however a more efficient one that goes like m, the number of edges, which often scales more slowly with n (see eq. 8.27 in Newman's book). Empirically, it is found that most social networks have positive assortativity while most others (technological, biological) have negative assortativity.
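For assortative mixing by degree, an O(m) computation is just the Pearson correlation of the degrees at the two ends of each edge; a minimal sketch (the function name is mine, and this is the degree-correlation form of the coefficient, not Newman's exact eq. 8.27 written out):

```python
import numpy as np

def degree_assortativity(edges):
    """Pearson correlation of the degrees at the two ends of each edge
    of an undirected edge list (runs in O(m), m = number of edges)."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    # each undirected edge contributes both (d_u, d_v) and (d_v, d_u)
    x, y = [], []
    for u, v in edges:
        x += [deg[u], deg[v]]
        y += [deg[v], deg[u]]
    return np.corrcoef(x, y)[0, 1]
```

A star graph is maximally disassortative (the hub only touches leaves), giving r = −1; a path of 4 nodes gives r = −0.5.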
Part of the explanation for this seems to be that most networks are naturally disassortative by degree because they are simple graphs (see Mathematics of networks), and so the number of connections between high-degree nodes is limited; so if there aren't many high-degree nodes, they will have to connect mostly to lower-degree nodes (I think this is the gist of the explanation). Social networks, on the other hand, seem to overcome this due to their group structure (high clustering coefficient), so that in small groups people of low degree will be mostly connected to people with low degree (i.e. within the small group), and the larger groups will contribute to making people of high degree mostly connected to people of high degree (i.e. within the large group). Latex is a colloidal dispersion of polymer particles in a liquid. A lattice is an Algebraic structure defined as a partially ordered set in which every pair of elements a, b has a meet (greatest lower bound) a ∧ b and a join (least upper bound) a ∨ b. A unit element in a lattice is an element 1 such that, for all a, a ∧ 1 = a. A null element in a lattice is an element 0 such that, for all a, a ∨ 0 = a. The lattice L is complete if a Greatest lower bound and a Least upper bound exist for every subset M of L (all that is guaranteed by the definition of a lattice is that these bounds will exist for all finite subsets of L). If these exist, they are denoted as ⋀M and ⋁M, respectively. A lattice: Linear layer. Linear function ReLU layer. Rectified linear unit For x=0, may use subderivatives. Very popular. See Machine learning Mathematical theory of learning. Learning problem: design a system that improves its ability to perform task T, as measured by performance measure P, by going through experience E. Empirical risk minimization Minimize a cost function, which often is the negative log likelihood (similar to entropy; more precisely, cross-entropy, or relative entropy), which corresponds to maximizing likelihood. Likelihood is the probability of getting the right y given x and θ, i.e. the probability that a given model predicts the right outputs.
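For a Gaussian model with fixed variance, maximizing likelihood is the same as minimizing squared error, so gradient descent on the negative log-likelihood lands on the ordinary least-squares solution; a minimal sketch (the synthetic data and learning rate are my choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x + 1.0 + 0.1 * rng.standard_normal(50)  # noisy line

X = np.column_stack([x, np.ones_like(x)])  # design matrix [x, 1]

# Closed-form least squares (normal equations)
theta_ls = np.linalg.lstsq(X, y, rcond=None)[0]

# Gradient descent on the Gaussian negative log-likelihood
# (fixed sigma, so NLL is proportional to the sum of squared residuals)
theta = np.zeros(2)
for _ in range(5000):
    resid = X @ theta - y
    grad = X.T @ resid / len(y)  # gradient of 0.5 * mean squared residual
    theta -= 0.1 * grad
```

Both routes recover (almost exactly) the same slope and intercept, which is the "spring energy" picture in the note below: each data point pulls the fit line with a quadratic penalty.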
This is equivalent to finding the most likely θ in the Bayesian posterior, given a flat prior (but if we add a regularizer, we can tweak the prior, by just adding a term to the log likelihood). If our model uses a Gaussian distribution to predict the data (where the predictions f(x_i) are the means), maximizing likelihood is equivalent to minimizing the spring energy of springs placed vertically between the fit curve and the data. The maximum likelihood is found by Optimization, often by Stochastic gradient descent. If we want the whole distribution over θs, we need to use Bayesian statistics, which involves doing complicated integrals, often done numerically using Monte Carlo methods file:///home/guillefix/Dropbox/Oxford/Systems%20Biology%20DPhil/Research/schoelkopf.pdf Adaptive resonance theory
The primary intuition behind the ART model is that object identification and recognition generally occur as a result of the interaction of 'top-down' observer expectations with 'bottom-up' sensory information. The model postulates that 'top-down' expectations take the form of a memory template or prototype that is then compared with the actual features of an object as detected by the senses. This comparison gives rise to a measure of category belongingness. As long as this difference between sensation and expectation does not exceed a set threshold called the 'vigilance parameter', the sensed object will be considered a member of the expected class. The system thus offers a solution to the 'plasticity/stability' problem, i.e. the problem of acquiring new knowledge without disrupting existing knowledge. Natural extension of the join of two elements to an arbitrary Set of elements of a poset. Interpreting the Partial ordering as "less than or equal", it can be understood as the least point that is greater than or equal to all the points in the set. On the Complexity of Finite Sequences A Universal Algorithm for Sequential Data Compression (LZ77) See also Algorithm to compute LZ complexity measure for implementation and explanation. Elegant Compression in Text (The LZ 77 Method) - Computerphile http://www.data-compression.com/lossless.html The LZ77 Compression Family (Ep 2, Compressor Head) LZW algorithm Complexity measure based on Lempel-Ziv algorithms (in particular on the LZ77). In fact the measure was proposed earlier, in 1976, in On the Complexity of Finite Sequences. It is defined as the number of tokens in the LZ77 algorithm.
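A minimal sketch of the 1976 Lempel-Ziv complexity (number of phrases in the exhaustive parsing, where each new phrase is the shortest prefix of the remainder not already seen as a substring of the preceding text); my own implementation, simpler but slower than the Kaspar-Schuster algorithm usually cited:

```python
def lz76_complexity(s):
    """Number of phrases in the Lempel-Ziv (1976) parsing of sequence s."""
    i, c = 0, 0
    n = len(s)
    while i < n:
        length = 1
        # grow the candidate phrase while it already occurs in the prior text
        # (s[:i+length-1] allows the one-symbol-extension overlap of LZ76)
        while i + length <= n and s[:i + length - 1].find(s[i:i + length]) != -1:
            length += 1
        c += 1          # one new phrase
        i += length
    return c
```

For example, "aaaaaaaa" parses as a·aaaaaaa (complexity 2), "abababab" as a·b·ababab (3), and the classic example 0001101001000101 as 0·001·10·100·1000·101 (6).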
Algorithm to compute LZ complexity measure See also Descriptional complexity On the non-randomness of maximum Lempel Ziv complexity sequences of finite size Estimating the Entropy Rate of Spike Trains via Lempel-Ziv Complexity pdf https://en.wikipedia.org/wiki/List_of_life_sciences "The life sciences comprise the fields of science that involve the scientific study of living organisms – such as microorganisms, plants, animals, and human beings – as well as related considerations like bioethics. While biology remains the centerpiece of the life sciences, technological advances in molecular biology and biotechnology have led to a burgeoning of specializations and interdisciplinary fields." Life sciences - Elsevier, ScienceDirect Alan Hastings - Population biology Ben Goldacre-Bad Pharma_ How Drug Companies Mislead Doctors and Harm Patients-Faber & Faber (2012) Forced movements, tropisms, and animal conduct Singh S., Ernst E. Trick or treatment alternative medicine on trial 2008 The mechanistic conception of life - biological essays - Loeb, Jacques, 1859-1924 Lighting or illumination is the deliberate use of light to achieve a practical or aesthetic effect. The Likelihood function is defined as L(θ) = P(data | θ), i.e. the probability of the data given the theory. One often considers the log-likelihood, which is just the log of the likelihood. See also Fisher information matrix A supertask refers to an infinite number of actions performed in a finite amount of time. This is analogous to other "super"-things, like supersolids, which are solids with a finite volume but an infinite surface area (like Gabriel's horn, or other ones that don't have to be unbounded in linear size). Other mathematical objects defined as limits are space-filling curves, described in this video by 3Blue1Brown, which also explains the usefulness of infinite results in a finite world. Basically, infinite results are always described as a limit of a sequence of finite results. And these finite results themselves are useful.
The concept of infinity is still useful because it allows one to understand and summarize these finite results in simple ways. See Regression analysis. Use Matrix calculus for optimization: leads to the normal equations (analytical solution to least squares), etc. See here Liquid crystals correspond to matter in non-isotropic phases, like the nematic or smectic phases, but which don't have full crystalline order. See more in Principles of condensed matter physics book, and de Gennes and Prost, "The physics of liquid crystals". Liquid crystal theory is, I think, derived following Landau's theory of phase transitions, given an order parameter: one includes terms that satisfy certain symmetries. The order parameter, for uniaxial LCs, is: Q_ij = S (n_i n_j − δ_ij/3), where S is a scalar indicating the level of ordering (i.e. the variance of individual molecules' directions about the director field n), and n is the director (the direction in which the molecules point on average at a given point), where the direction is considered as a ray, i.e. n and −n are physically equivalent. The free energy of distortion (per unit volume) of a liquid crystal has the form: f = (1/2)K₁(∇·n)² + (1/2)K₂(n·∇×n)² + (1/2)K₃|n×(∇×n)|², where K₁, K₂, and K₃ are the elastic constants corresponding to the three types of elastic deformation that alter the long-range order in liquid crystals (splay, twist, and bend, and these distortions are thus opposed by elastic forces): Fréedericksz transition People P.G. de Gennes (see his book on Physics of liquid crystals) http://kek.cat/
http://niceme.me/
http://nicemem.me/
http://nicememe.website/
http://nicememewebsite.website/ etc
http://meme.ai/
http://rarepe.pe/
https://wowsuchdoge.com/
http://www.nyan.cat/
http://trololololololo.com/
http://hristu.net/
http://leekspin.com/
http://www.shrekis.life/
http://www.ultimate.com/
http://trolololo.lol/
http://trololololo.lol/
https://www.youtube.com/watch?v=y_e6M0x3Lqw
https://www.youtube.com/watch?v=tKNhPpUR0Pg
https://www.youtube.com/watch?v=BQ8ZqF6JNfA
https://www.youtube.com/watch?v=VtmQ0vKPKqQ https://www.youtube.com/watch?v=2gmQYBZPV7g
https://www.youtube.com/watch?v=SwxdBiazu8M
http://pepefrogme.me/ https://www.youtube.com/watch?v=N3Q6TqtDyF0 http://www.dabadabadab.com/index.html
http://www.ooooiiii.com/
http://www.lalalaa.com/
http://www.iiiiiiii.com/ Statistical Mechanics of Systems with Long-Range Interactions (on first lect of this part)
book Book: Physics of Long-Range Interacting Systems Often take them to have a two-body interaction of the form: V(r) ∼ 1/r^α, where d is the dimension of space. For α ≤ d, the systems are non-additive (or non-extensive), in the sense of Statistical physics, so that, for example, energy is not simply proportional to volume. See Power laws Another interesting quantity (here applied to networks, though applied to wealth distribution and elsewhere of course) is the fraction W of ends of edges that connect to the fraction P of nodes when ordered by their degree (i.e. the top fraction P of nodes, by degree). It can be shown that for scale-free networks: W = P^((α−2)/(α−1)). The curves of W vs. P are called Lorenz curves, after Max Lorenz. For example, for the World Wide Web α ≈ 2.2 for links, and the curve shows that 50% of links go to the top 2% "richest" pages ("richer" meaning with a higher number of links). Actually, as the WWW doesn't follow a perfect power law, the real number is closer to 1.1%. This is related to Gini coefficients. More on power laws As a comparison, one can calculate the Lorenz curve for an exponential distribution, for example. Both P and W go like e^(−x/x̄) for large x (i.e. small P or W): P = e^(−x/x̄) and W = (1 + x/x̄)e^(−x/x̄). Therefore the Lorenz curve is W = P(1 − ln P), which at its extreme goes like −P ln P, and so the top P have just P(1 − ln P) of the wealth. The typical plot however plots the income of the bottom percent, i.e. 1 − W, vs that percent from the bottom, i.e. 100(1 − P)%. Here is the resulting plot in WolframAlpha. This shows that indeed inequality is not exclusive at all to power law distributions. In fact the only distribution with a perfectly equal Lorenz curve corresponds to when everyone has the same, so the distribution is a Dirac delta centered on a certain point. However, power law distributions often do show more inequality than exponential distributions. For instance, in power laws a typical situation is the famous "80-20 rule", by which the top 20% have 80% of the income. For the exponential distribution, the formula above gives that the top 20% has "only" about 52% of the income.
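Claims like these are easy to check empirically by building a Lorenz curve from samples; a minimal sketch (function name and sample sizes are mine):

```python
import numpy as np

def lorenz_curve(values):
    """Fraction W of the total held by the top fraction P of the population."""
    v = np.sort(values)[::-1]              # richest first
    W = np.cumsum(v) / v.sum()             # cumulative share held by the top k
    P = np.arange(1, len(v) + 1) / len(v)  # corresponding population fraction
    return P, W

rng = np.random.default_rng(1)
expo = rng.exponential(scale=1.0, size=200_000)

P, W = lorenz_curve(expo)
top20 = W[np.searchsorted(P, 0.2)]  # share held by the top 20%
```

The sampled curve agrees closely with the analytic W = P(1 − ln P) for the exponential, and changing `scale` leaves the curve unchanged (Lorenz curves are scale-invariant).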
try a different exponential distribution, do I get a different Lorenz curve? (I think not, actually: the exponential's Lorenz curve is independent of its scale parameter, so the comparison carries some meaning after all.) What preferential attachment (and its resulting power law distributions) does is not make extreme events possible (they are possible in other networks), but it makes them more likely (the power law decays less rapidly). In the preferential attachment model, this is because extremes are amplified due to the nature of the model. See Active matter, Microhydrodynamics, and Kinematic reversibility in fluid dynamics for more Zero Reynolds number doesn't mean no acceleration. It just means that no force is needed to cause that acceleration. In the zero Re limit, if the swimmer accelerates (say by varying the velocity of the corkscrew), and if it has a finite mass, the fluid will exert a net force on the swimmer, and thus the swimmer will exert a net force on the fluid, momentarily creating a Stokeslet component. If we somehow had a small but very heavy swimmer with a large thrust too, it would then create a Stokeslet velocity field for a significant period of time. Happel and Brenner book: Low Reynolds number hydrodynamics (book) Reciprocal theorem The reciprocal theorem allows one to determine results for one Stokes-flow field based upon the solution of another Stokes flow in the same geometry, i.e. having the same boundaries but different boundary conditions. See A physical introduction to suspension dynamics. See Artificial and machine intelligence and Artificial intelligence, Deep learning Building Machine Learning Systems with Python
– Machine learning in Matlab
–Lecture list of Andrew's course:
– lecture notes
– Andrew Ng machine learning course https://www.youtube.com/watch?v=UzxYlbK2c7E . On lecture 2
– Machine Learning - mathematicalmonk
– Machine Learning: A Probabilistic Perspective and here
– Machine Learning: Discriminative and Generative (The Springer International Series in Engineering and Computer Science) https://en.wikipedia.org/wiki/Generative_model Training data consisting of inputs and outputs. Want to find a function relating inputs to outputs, to then be able to predict new outputs from new inputs. Need a way to represent the function approximation, with some parameters (the model), and a learning algorithm to find the best parameters for the data. Two main types: New paradigm: Deep learning Cocktail party problem. Independent component analysis K-means Clustering Community clustering in networks Variations on supervised and unsupervised Semi-supervised learning: you are given a set of inputs, but you only have the corresponding outputs for some. You have to predict the outputs for the rest (by learning the function, for instance, as in Supervised learning). Active learning: like semi-supervised learning, but the algorithm can ask for extra data, which it deems to be the most useful data to ask for. Basically loss-functions/costs used by the learning agent are based on Decision theory. See example here. To me it seems like the difference with supervised learning is that you don't specify input-output pairs, but just outputs. You specify desired outputs, and undesired outputs. There is no input, but still the problem is not just trivial (i.e. it only ever produces one output), because the model is probabilistic. Reinforcement learning: sequence of decisions, Reward function. Used often in robotics. Go deep into the rabbit hole Good framework: Stan Deep Learning Lecture 5: Regularization, model complexity and data complexity (part 2) So the simplest model that works seems to work best most of the time. Seems like an example of Occam's razor, and thus related to Solomonoff's ideas on inference (see Algorithmic information theory). Epicurus principle also related to Bayesian inference, because we give a distribution over models, but we keep all of them.
Hmm, also your error can't be smaller than the fundamental noise in the data. Well it can, but your model will at best be wasteful then. Try Torch: See AI and nanotech in Nanotechnology Deep Learning Applications in Science and Engineering neural networks in physics.. See dissertation by Nicholas Chen Neural networks modeling for refractive indices of semiconductors Computational physics: Neural networks The role of networks and artificial intelligence in nanotechnology design and analysis Advances in Machine Learning Research and Application: 2013 Edition Random idea for neural network for chemical synthesis and manufacturing etc. Facebook post: https://www.facebook.com/guillermovalleperez/posts/10153853693416223? Magnetohydrodynamics (a.k.a. MHD). GdR Dynamo 2015 (nice lecture series on MHD and related topics) See also other lecture courses in MMathPhys Note: Flux freezing does not imply a one-to-one correspondence between the magnetic field strength B and the displacement field ξ of the fluid, because the relation includes a curl: δB = ∇ × (ξ × B₀).
In waves in MHD, B also changes, and therefore its effect is important. In particular note the MHD linear wave equation for the perturbations. The second equation means that if the fluid gets compressed in the direction perpendicular to the magnetic field, the magnetic field increases in magnitude. This has to be the case because of flux freezing. Production of goods, by processing of raw materials. When manufacturing is done in the context of an economy, it's called Industry. Markov process with a discrete state space. Can have: Ergodic theorem for Markov chains Has applications in the theory of Stochastic processes, and in Machine learning, in particular through the Hidden Markov model See also Finite state channel Order of a Markov chain. See here Markov subchains A subchain of a Markov chain is also a Markov chain Regular Markov chain here here See book Markov chains by Norris Chapman-Kolmogorov equation Visualization: http://setosa.io/ev/markov-chains/ For discrete space Stochastic processes Discrete time master equation For discrete time, the probability to be in state i at time t+1 is: p_i(t+1) = Σ_j T_ij p_j(t), where the T_ij are the transition probabilities (which can be expressed as a transition matrix). Continuous time master equation For continuous time, we can subtract p_i(t) from both sides of the discrete time equation, and divide by Δt. Then dp_i/dt = Σ_j [W_ij p_j − W_ji p_i], where W_ij = lim_{Δt→0} T_ij/Δt for i ≠ j, and where for the bracketed part we used that probability is conserved (i.e. the particle has to go somewhere), Σ_i T_ij = 1, and in the second line we cancelled the i = j terms from both terms. Solve using Fourier series, as if it is in a (discrete) lattice. For more general networks, Fourier methods may not be appropriate. You can then use eigenvector methods
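The eigenvector method mentioned above can be sketched in a few lines: iterate the discrete-time master equation and compare with the eigenvector of the transition matrix for eigenvalue 1 (the transition matrix here is my own toy example):

```python
import numpy as np

# Transition matrix for a 3-state chain: T[i, j] = P(j -> i), columns sum to 1
T = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.6, 0.1],
              [0.2, 0.2, 0.6]])

# Discrete-time master equation: p(t+1) = T p(t)
p = np.array([1.0, 0.0, 0.0])   # start in state 0
for _ in range(200):
    p = T @ p

# Stationary distribution: eigenvector of T with eigenvalue 1,
# normalized so its entries sum to 1
vals, vecs = np.linalg.eig(T)
pi = vecs[:, np.argmax(vals.real)].real
pi = pi / pi.sum()
```

Since every entry of T is positive, the chain is regular and p converges geometrically to pi regardless of the starting state.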
Perturbation method to get approximate solutions to singular perturbation problems of differential equations, often when the small parameter ε is multiplying the highest derivative. Then the leading-order problem is of lower order, and will in general not be able to satisfy all the boundary conditions of the original problem. If y is the solution, then one possible behaviour in such cases is that y varies rapidly in thin regions. In fluid dynamics these regions are known as boundary layers, in solid mechanics they are known as edge layers, in electrodynamics they are known as skin layers, etc. For this reason the subject of matched asymptotic expansions is sometimes called boundary-layer theory. Boundary layers can also appear in other circumstances, for instance when the perturbation converts a linear DE into a nonlinear one. See the example in question 2 here (in that case, the linear problem has a solution with a singularity, but the nonlinearity changes this; see solution in black oxford notebook). Trick for finding the scaling in a boundary layer: the highest-derivative term (the one multiplied by ε) is often significant in the boundary layer (though not always!). We must increase n (where the inner variable is X = x/ε^n) until this term balances the largest of the others in the equation. Prandtl's matching: most elementary. You simply require lim_{X→∞} y_BL = lim_{x→0} y_m (for a layer at x = 0), where BL and m refer to the boundary-layer and middle (outer) solutions and variables. Van Dyke's matching 'rule' usually works (it is more powerful than Prandtl's) and is much more convenient than the intermediate variable matching below. The rule is: in the outer expansion, in the outer variables, expand to m terms; then switch to inner variables and re-expand to n terms. The result is the same as first expanding the inner expansion in the inner variables to n terms, then switching to outer variables and re-expanding to m terms. Hmm, but these expansions are expressed in different variables. I guess, as a last implicit step, I should convert to the same variables to compare. What's the justification of this rule?
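As a concrete illustration of the rule (a standard textbook example, not from the notes referenced above), consider εy'' + y' + y = 0 on [0, 1] with y(0) = 0, y(1) = 1:

```latex
% Outer solution: drop \epsilon y'' and apply y(1) = 1
y' + y = 0 \;\implies\; y_{\mathrm{out}} = e^{1-x}.
% Inner solution: boundary layer at x = 0, inner variable X = x/\epsilon
Y'' + Y' = O(\epsilon) \;\implies\; Y = B\left(1 - e^{-X}\right)
  \quad \text{(using } y(0) = 0\text{)}.
% Van Dyke (m = n = 1):
% 1-term inner expansion of the 1-term outer: e^{1-\epsilon X} \to e,
% 1-term outer expansion of the 1-term inner: B(1 - e^{-x/\epsilon}) \to B,
% so matching gives B = e.
% Composite = inner + outer - overlap:
y \approx e^{1-x} + e\left(1 - e^{-x/\epsilon}\right) - e
  = e^{1-x} - e^{\,1 - x/\epsilon}.
```

Here Prandtl's rule gives the same answer (both limits equal e), and the composite expansion is uniformly valid across the layer and the outer region.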
When using this matching rule you must treat log(1/ε) as O(1), because of the size of logarithmic terms. Intermediate variable matching: most advanced and powerful of the methods. More tedious to apply too. Expansions for two "contiguous" regions should actually have an overlap or transition region where both expansions are valid. For example, suppose there is a boundary layer for x = O(ε), but the expansion we find is actually valid for all x ≪ δ₁(ε), i.e., the expansion breaks when x becomes O(δ₁) or larger. Suppose also that the middle, or outer, region is defined for x = O(1), but the expansion is valid for x ≫ δ₂(ε), with δ₂ ≪ δ₁. Then in any intermediate region with δ₂ ≪ x ≪ δ₁ both expansions are valid, and therefore they should match due to the uniqueness of Asymptotic approximations. Note some terms jump order: a term in the examples in the notes comes from the inner expansion of the first-outer term, but it also comes from the outer expansion of the second-inner term. "First-outer" refers to first order in the expansion in the outer region. The terms "inner and outer expansion" here refer to the expansion in terms of rescaled variables, but these terms are most often used for the van Dyke rule, where the "outer expansion" refers to the expansion of some term in terms of the outer variable, and similarly for the "inner expansion". The nomenclature he uses is a bit confusing though. A composite expansion is an expansion that is valid across the whole domain. It is built as y_comp = Σ y_in + Σ y_out − (overlaps), where the y_in are the solutions in the boundary layers, the y_out are the outer solutions outside the boundary layers, and the overlaps are removed to avoid double counting. The overlap term removes the contribution from the inner expansion when looking at the outer region(s); or the contribution from the outer expansion when looking at the inner region(s). In practice, this can be done by subtracting a term of the appropriate form at the right order.
It is not unique, because it is not in standard Poincaré form. Boundary layers Think of the problem, and note that to have the possibility of a non-trivial boundary layer we need some solution in the inner region which decays as we move towards the outer region. In the problem considered in the notes, for example, the non-constant solution in the right-hand "boundary layer" grew exponentially as we moved to the outer region, so there could never be a boundary layer at that end. Transition or interior layers Regions of fast change, not at the boundary, but in the interior of the domain. Finding the position of an interior layer can sometimes be hard. Non-linear boundary layers Boundary layer at infinity Revise these Example: van der Pol oscillator Revise this example pages 36-41 Materials science, also commonly known as materials science and engineering, is the Science and Engineering of material properties, their design, and uses. A material, I think, most often refers to a type of Bulk matter (identified by its composition in terms of phases and chemical composition). However it may sometimes refer to chemical substances per se, or other non-bulk, but relatively simple, arrangements of matter, as for example, in Nanotechnology. It may even be used for more complex arrangements of matter so that a bulk description is not totally appropriate, such as in "smart materials", where ideas from Complex systems may be necessary for their description. Materials science needs to describe the specific properties of each material. Constitutive equations play a fundamental role in the theory of these properties. The physics of materials is based on Condensed matter physics. If dealing with fluids, it of course uses Fluid mechanics too. Best materials course ever (mostly metals) See classification of materials in Condensed matter physics. Some important materials: Polymers, Metals, Ceramics, Composite material.
See also Soft materials, Chemistry, Surface science Some material properties: See for a good resource on materials properties Variational Methods for Microstructural Evolution Some IUPAC definition recommendations: Things can make sense Mathematical logic is an essential part (if not the essential part) of the foundations of mathematics See Discrete mathematics, Theoretical computer science, Logic.. Video lectures: https://en.wikipedia.org/wiki/OpenMath A critique of OpenMath While I applaud the occasional successes in these ventures, the results have been unimpressive even from the range of computations routinely performed by computer algebra systems. They certainly represent a small scope compared to the kinds of mathematics human researchers deal with informally on computers. (Consider all the advanced mathematics routinely typeset by use of the program TEX.) My view is that much of today’s applicable mathematics, including that in ordinary texts and journals, is simply too informal to be handled by the logical and algebraic means typically proposed by the constructivists. Indeed, much of mathematical discourse goes beyond informality to be (unintentionally) ambiguous on its face. The ambiguity can generally be resolved by a sufficiently contextual interpretation, often requiring a reader to be skilled in the mathematical subdiscipline – not merely the notation – being represented.
Almost any ambitious computer algebra system that must eventually meet performance expectations seems to abandon proofs or (complete) formal rigor
One person’s syntax is another person’s semantics
AugMath should be able to represent informal mathematics, by basing its philosophy in the notation, just like LaTeX itself, rather than in the semantics. Semantics can be added later as a layer... Get functions MathML from here: http://functions.wolfram.com/Bessel-TypeFunctions/BesselI/11/0001/ While all aspects of mathematics can potentially be applied, Mathematical methods refers to those parts of mathematics designed to be applied. This can also be called applied mathematics. One important sub-area is industrial mathematics, mathematics applied to industry. Special functions and their properties: http://dlmf.nist.gov/ Deep learning is an area of machine learning that studies learning algorithms with multiple levels of abstraction Why do Deep Learning models perform so well? Seems to be a result of: Mathematical difficulty because: Nonlinearity, non-convexity (convex optimization or complex analysis techniques not available), many d.o.f. A neural network is composed of neurons. Data comes in through dendrites, which scale it. The axon computes (applies a nonlinear function) and propagates the output through the synapse. A multilayer feedforward neural network. L+2 layers, L hidden. The neural network is just a function from R^n to R^m, wlog..?.. Training: given a dataset of inputs and outputs, we want the function to map these as well as possible. Use a Loss function and a regulariser (penalization on the size of parameters. Could also try to maximize sparsity, Occam's razor, bias towards simpler models. Also makes the surface more convex). Then minimize the empirical risk. To minimize we use stochastic gradient descent. Assuming properties of the function such as continuity, differentiability, convexity. Can a multilayer feedforward network f approximate g arbitrarily well, for a very general g? Universality We can't expect f for the model considered (one layer) to approximate any g whatsoever; there are some very pathological functions. We can assume g is continuous, or just Lebesgue measurable (use this metric for defining closeness in this case).
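The universality claim can be made concrete with a tiny numpy sketch (my own construction, not from the notes): a one-hidden-layer ReLU network trained by full-batch gradient descent to approximate a continuous target, here g(x) = |x|:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a continuous function on [-1, 1]
x = np.linspace(-1, 1, 200).reshape(-1, 1)
g = np.abs(x)

# One-hidden-layer ReLU network: f(x) = relu(x W1 + b1) W2 + b2
H = 20
W1 = rng.standard_normal((1, H)); b1 = np.zeros(H)
W2 = rng.standard_normal((H, 1)) * 0.1; b2 = np.zeros(1)

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)
    return h, h @ W2 + b2

_, pred0 = forward(x)
loss0 = float(np.mean((pred0 - g) ** 2))

lr = 0.05
for _ in range(2000):
    h, pred = forward(x)
    err = 2 * (pred - g) / len(x)        # d(mse)/d(pred)
    gW2 = h.T @ err; gb2 = err.sum(0)
    dh = (err @ W2.T) * (h > 0)          # backprop through the ReLU
    gW1 = x.T @ dh; gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(x)
loss = float(np.mean((pred - g) ** 2))
```

With a handful of hidden units the piecewise-linear network fits the kink in |x| far better than any single linear function could, which is the intuition behind the universality results discussed next.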
We can show then that f can approximate g arbitrarily well. Many other models are also known to be universal. Other minima. The loss surface is the surface defined by the empirical risk, E_M. The epigraph is non-convex. Local minima of E_M are known to abound. Results: Other results: only a few parameters matter. The manifold hypothesis: meaningful data often concentrates on a low-dimensional manifold, so large numbers of parameters don't matter. → See dissertation topic proposed by Ard Louis. Energy propagating from node i through path j Analogy between the loss function of a neural network and the Hamiltonian of a spin glass. (Multilayer: composition of functions.) See this combination of Machine learning (in particular Natural language processing) and Computer algebra. http://www.parsegon.com/ Maple Mathematica http://www.cinderella.de/files/HTMLDemos/ http://www.geometrygames.org/KaleidoTile/ Geogebra http://mathgl.sourceforge.net/doc_en/Main.html Cinderella Cool surfaces software: https://imaginary.org/program/surfer Mathematics is the study of structures themselves. These are necessary in Science and in Art, as both require the invention of structures to either explain (and thus understand) the world, or for any other purpose (in the case of Art). Mathematics, however, doesn't concern itself with the purposes or details of particular structures; rather, it concerns itself with the abstract properties common among many structures. It is both the Art and Science of the structure of structures. It studies many structures in the world, and creates an abstract structure to understand them. In this sense it is a Science. It also creates new unobserved abstract structures, often generalizing observed ones. In this sense it is an Art. Mathematics is sometimes called a formal science.
Useful resources and tools https://en.wikipedia.org/wiki/Category:Mathematics_portals http://www.msri.org/web/msri/online-videos Books People http://math.ucr.edu/home/baez/ http://euler.nmt.edu/~jstarret/ Other links: http://mathgl.sourceforge.net/doc_en/Main.html http://www.theshapeofmath.com/princeton/dynsys https://www0.maths.ox.ac.uk/courses From http://bactra.org/thesis/single-spaced-thesis.pdf : Formalizing intuitions: as (Quine 1961) insists, the goal [of formalizing some notion] is that the formal notion match the intuitive one in all the easy cases; resolve the hard ones in ways which don't make us boggle; and let us frame simple and fruitful generalizations. A network is a collection of nodes joined by edges. More generally, it is a collection of elements and their interactions. Most of the time, it has
the same mathematical structure as a graph, G, defined as an
ordered pair G = (V, E), where V is the set of vertices and E the set of edges (pairs of vertices, etc.). However, by interpreting an edge as a more general kind of relation, its
mathematical structure can be a hypergraph. One can also have different types of vertices and edges defined for a network. A simple network is a binary, undirected network that only has a
single edge between a pair of nodes (i.e. no multi-edges), and
doesn't have self-edges (a.k.a. self-loops). Representations: Edge lists, adjacency matrices (a.k.a. network matrix). Adjacency matrix A, with A_ij = 1 if there is an edge between j and i (0 otherwise).
A_ij = A_ji if undirected. A describes the same network if we permute columns and rows in the same
way. A weighted adjacency matrix (or weight matrix) assigns a weight
to edges. Usually the weight is a real number: "topology" represented by the pattern of nonzero entries of A; "geometry" represented by the weights. Cocitation and bibliographic coupling in directed networks Two useful matrices, derived from the directed network adjacency matrix
are the following (both can be used to define adjacency matrices
that are symmetric and thus undirected! easier to
analyze): Cocitation matrix: . Nodes related if there is a node that points to both. Bibliographic coupling matrix: . Nodes related if there is a node to which both point. Simple network, described above. Acyclic networks have no cycles. A Directed Acyclic Graph (DAG) is a well known sub-type. Hypergraphs are sets of
elements with
relations
that include more than a pair of elements (i.e. they are members of a
higher Cartesian product). Hypergraphs can equivalently be represented as Bipartite Networks, where there are two types of nodes (a special case of a multipartite network, where there are many types). On the other hand, a multiplex network is one that has multiple types of edges. Trees are connected (can reach all vertices following edges), undirected networks that contain no closed loops. A forest is a graph whose connected components are trees. A Planar network is a network that can be drawn on a plane without having any edges cross. It is a special case of a Spatial network. Temporal networks are those for which the set of edges and/or nodes varies with a time parameter. A Similarity network is one that expresses how similar entities (represented as the nodes) are, the degree of similarity being the weight of the edge. The degree, k_i, of a vertex, i, is the number of edges connected to the vertex. Paths A path in a network is a sequence of nodes such that every consecutive pair of nodes in the sequence is connected by an edge in the network. Definition of path, cycle, trail, circuit
The definition is extended to the directed case by only permitting traversal in the direction of the edge. Note only directed graphs can have 2-cycles. Components A component is a subset of the network for which all pairs of vertices have at least one path, and which is maximal (i.e. no extra nodes can be added that preserve this property). Independent paths, connectivity, and cut sets The number of independent paths between two vertices (the connectivity) gives a measure of how strongly connected they are. Paths can be vertex-independent if they share no vertex (other than starting or ending vertices), or edge-independent if they share no edge. A vertex (edge) cut set is a set of vertices (edges) that, if removed, will disconnect a specified pair of vertices. A minimum cut set is the smallest such set for the vertices. The graph Laplacian is a useful quantity, derived from the adjacency matrix, which can be used to describe diffusion processes in a network, as well as in problems of random walks, resistor networks, graph partitioning and network connectivity. A random walk is a path across a network created by taking repeated random steps. They are usually allowed to traverse edges more than once, and visit vertices more than once. If not, it is a self-avoiding random walk. They are mathematically connected to resistor networks. Matroid theory A matroid is a structure that captures and generalizes the notion of linear independence in vector spaces. There are many equivalent ways to define a matroid, the most significant being in terms of independent sets, bases, circuits, closed sets or flats, closure operators, and rank functions. Matroids as a Theory of Independence by Federico Ardila See books on matroid theory (See Arrival of the frequent for context) See also Wright-Fisher model The Hamming distance (i.e. 
the number of differing letters, or mutations) is then distributed binomially: The expected number of individuals with genotype that arises at generation can be written as: where is the probability that a -fold mutation of genotype (selected for reproduction according to fitness ) generates an individual with phenotype . It takes into account the genotype-phenotype map. is the genotype of the th member of the population, with a total of members. See derivation of this below: As the number is distributed binomially, the average number is . Then we define . Furthermore, = = = . Finally, = By fine-graining the transitions from to a phenotype-genotype into transitions with particular mutation numbers , we can write , recovering Eq. 1 [#[manual links]] (try to upgrade TW to make this work) The actual number of individuals with genotype will follow a binomial distribution (as explained for a simple case in Wright-Fisher model), with probability , and number of trials . The probability of none of the offspring having phenotype is: , the approximation holds for large , and may be seen as approximating the Binomial distribution by a Poisson distribution. If we assume that , i.e. the average number of mutations per genotype is very small, then for all , and ( while of course). With the above assumption that , . Also, , if . Next, if we assume , for all with mapping to phenotype (i.e. in space ), and that it all starts within , we have We can also define the averaged {expected number of offspring with phenotype at one generation, which inherited from genotype at the previous generation via a single mutation}, i.e. the average of , over all in . We will abuse notation, and use the label in to label a genotype in , so that . The average is then: Furthermore, we should note that, as (and a similar expression for the dependent quantities). When , we find , and also, for example, that , where . Thus is the average of this probability. 
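The sampling scheme described above (Wright–Fisher reproduction with per-letter mutations, so that the Hamming distance between offspring and parent is binomially distributed) can be sketched in a few lines. All parameter values below are made up for illustration, and fitness is taken flat (neutral) for simplicity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative (made-up) parameters: population size N, genome length L,
# per-letter mutation probability u, binary alphabet.
N, L, u = 100, 50, 0.01
pop = np.zeros((N, L), dtype=int)   # monomorphic initial population
fitness = np.ones(N)                # flat fitness: neutral evolution

def next_generation(pop, fitness):
    # Wright-Fisher sampling: each offspring picks a parent with
    # probability proportional to fitness, then mutates each letter
    # independently with probability u -- so the Hamming distance
    # (number of differing letters) to the parent is Binomial(L, u).
    p = fitness / fitness.sum()
    parents = rng.choice(len(pop), size=len(pop), p=p)
    offspring = pop[parents].copy()
    flips = rng.random(offspring.shape) < u
    offspring[flips] ^= 1           # binary alphabet: flip mutated letters
    return offspring

for _ in range(20):
    pop = next_generation(pop, fitness)
```

A genotype-phenotype map would enter by making `fitness` a function of each row's phenotype rather than a constant.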
We also define the robustness of phenotype , as equal to the average probability over all of a neutral mutation (i.e. one from to ). Under the approximate assumptions above, . If we assume also that the population is large enough (more precisely, we are in the Polymorphic limit (Wright-Fisher model)), we can use a mean field approximation: approximate by . This approximation works best if the population is large enough that most of the neutral space is populated (or in the paper author's words, "1-mutant neighbourhood of the population is similar to that of the whole neutral space"). Using this in Eq.2: Statistical field theory that ignores fluctuations. I.e. just describes the behaviour of the mean quantities of interest. Can get such behaviour by applying the method of steepest descents to the partition function. Examples Bragg-Williams theory for binary alloys or Ising model (similar to above). Curie-Weiss theory for the paramagnetic-ferromagnetic phase transition. See Measure theory A measurable function between two sets and , belonging to Measurable spaces , and , is {a Function , s.t. for any , the Preimage of is in }. I.e. the preimage of any set in the Sigma-algebra of the co-domain is in the Sigma-algebra of the domain. See Measure theory A space consisting of a set , and a Sigma-algebra . A measure on a set , with Sigma-algebra , is a Function , s.t. Specifying a measure on a sigma-algebra is simplified by the A Measure-theoretical dynamical system comprises: This space can be considered, without restriction, to be a Probability space. See Amigo's book. A Dynamical system on a Measurable space has a natural or physical invariant measure, corresponding to the Probability measure that numerical simulations of the system would produce asymptotically. If we know the structure of a network, then we can calculate a number of quantities or measures that capture features of the network topology (and geometry). 
Originally, a lot of these ideas were developed for social network analysis, but they are used elsewhere now too. Trying to answer: "Which are the most important or central vertices (or edges, or other substructures) in a network?" Degree centrality Simply the degree of a vertex can be used as a measure of its centrality. The eigenvector centrality (first defined by Bonacich in 1987) is defined by: where is the vector of centralities, and is the largest eigenvalue of A. A node can be important because it is connected to many nodes, or because it is connected to important nodes, or both. Katz centrality solves the problem posed above by giving all vertices a "free" centrality: There is one potentially undesirable feature of Katz centrality. An important vertex pointing to many vertices makes all those vertices important. The centrality gained by virtue of receiving an edge from a prestigious vertex is diluted by being shared with so many others (think a web directory like Google or Yahoo! pointing to my page. My page is not that central because it's just one of millions). We can solve this by making the centrality derived from neighbours be divided by their out-degree: which is the basis for PageRank. Hubs and authorities (Network theory) One can distinguish two types of important nodes in directed networks. We describe them for the case of an information network, like the WWW, first: This idea was implemented by Kleinberg into the hyperlink-induced topic search or HITS algorithm. Closeness centrality of node i is the mean geodesic distance to all other nodes in the network. A variant is exponentially weighted closeness centrality: where is the geodesic distance between node and ; and is the connected network component reachable from (except for ). Its main disadvantage is its often very low dynamic range (range of values it takes). There are also problems when there are disconnected components. 
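The eigenvector centrality defined above (the centrality vector rescaled by the largest eigenvalue of A) is usually computed by power iteration; a sketch on a small, made-up undirected graph:

```python
import numpy as np

# Small made-up undirected graph: node 2 has the highest degree.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Power iteration: repeatedly apply A and renormalize. The iterate
# converges to the eigenvector of the largest eigenvalue of A,
# i.e. the vector of eigenvector centralities.
x = np.ones(A.shape[0])
for _ in range(200):
    x = A @ x
    x /= np.linalg.norm(x)

print(np.round(x, 3))  # node 2 scores highest
```

Node 2 comes out on top both because it has the most neighbours and because its neighbours are themselves well connected, matching the "connected to many nodes, or to important nodes, or both" remark above.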
One way is to define closeness centrality over only connected nodes, or to use the harmonic mean (mean of reciprocals, ignoring the self-distance, as it's 0). Measures the extent to which a node (or edge, or other substructure) lies on paths between other vertices. These paths can be defined in many ways, but often they are taken to be geodesic paths. Many networks naturally divide into groups. These are substructures that are prominent for some reason. Simple examples are cliques, plexes and cores. There are also generalizations of components called k-components. Transitivity Transitivity (a property of mathematical relations) in a network is usually applied to the relation "is connected by an edge". So a network is transitive if for every u connected to v and v connected to w, then u is connected to w. One can define the clustering coefficient, , as a measure of "how often" transitivity holds in the network: Reciprocity For the simplest directed graphs the smallest loop size is two, instead of three, and thus one often measures the frequency of length-2 loops. This is called reciprocity (see Transitivity for more comments). Pairs of reciprocated edges (that is, edges from i to j where there is also one from j to i) are sometimes called co-links. The reciprocity is defined as the fraction of edges that are reciprocated, and this turns out to equal . Signed edges and structural balance Signed networks have signed edges, that is, their edges can have an associated sign of +1 (like friendship) or -1 (like animosity). Structural balance refers to the situation when the network contains only loops with even numbers of minus signs. This is so that the (naturally generalized versions of the) rules "the enemies of my enemies are my friends" and "the friends of my friends are my friends" hold. This is similar to the concept of "frustration" in spin networks. Harary's theorem tells us that all balanced networks are clusterable, i.e. 
they can be divided into groups with only positive connections within groups and negative between them. The proof, given in Newman's book, gives further intuition for the concept of balance. How can we measure the "similarity" of two nodes (or edges, etc.)? Two main approaches. Two nodes may be: Homophily or assortative mixing Homophily or assortative mixing is a bias in favour of connections between network nodes with some similar characteristics. In mechanics, we describe the motion of bodies, and the causes that affect them. This includes the special case where the "motion" is no motion, i.e. bodies are stationary. The description of the motion itself is called kinematics. This just sets up the relevant degrees of freedom, represented as variables in a relevant mathematical form. The description of the causes, and how these causes affect the motion, is called dynamics. These causes are often divided into forces and torques. This description relates the variables describing the motion above, to forces, which should depend on those variables themselves. This means that in dynamics we often have closed equations that we can solve in full generality. Another division of the areas of classical mechanics, used mostly in engineering, leaves the definition of kinematics the same, but what we referred to as dynamics above is called kinetics. Dynamics then refers to mechanics applied to proper motion only (i.e. not including the stationary case). In other words, dynamics is the kinematics and kinetics of proper motion. Mechanics applied to the stationary case is referred to as statics. In other words, statics is the kinematics and kinetics of static equilibrium. See the mechanical universe https://en.wikipedia.org/wiki/Mechanistic_target_of_rapamycin A Kinase that regulates cell growth, cell proliferation, cell motility, cell survival, protein synthesis, autophagy, transcription. 
Its signalling circuit has been studied, for instance as an example of GP map bias: Evolvability and robustness in a complex signalling circuit. A meet is an operation defined on elements of a poset (not necessarily all of them), defined as: The meet (or greatest lower bound) of is an element such that: Note that, if it exists, a meet is necessarily unique. See also Lattice (algebraic structure) A membrane protein, or membrane-bound protein, is a protein bound to the Cell membrane. A.k.a. memory management: the Operating system allocates memory to processes, so that a process can only access that portion of memory. This memory is divided into: The addresses that the program uses to reference variables are actually Virtual memory addresses, which the operating system translates to physical memory. https://en.wikipedia.org/wiki/Memory_management What and where are the stack and heap? http://stackoverflow.com/questions/18446171/how-do-compilers-assign-memory-addresses-to-variables This tiddler is about this TiddlyWiki itself. https://github.com/Jermolene/TiddlyWiki5/issues/2180
plugin or feature request: inner-tiddler-anchors Manual links (for inter-tiddler links)! Check how to implement this, maybe I need to Upgrade TW? ServerCommand hmm? Nice too! Codemirror editor Install this In here I have an example of a workaround to get javascript working on there. Even though script tags are supposed to be removed, they aren't when inside an Also should think of adding jquery, via a custom plugin. Trick to embed stuff from other pages using iframes (and position the embedded content correctly). See example Apollonian gasket Font Awesome: $:/plugins/TheDiveO/FontAwesome/fonts/FontAwesome TW on Font Awesome for TW $:/core/modules/macros/testMacro This is an example of a (global) Javascript macro, as one can define local ones. To use it, do in any tiddler, where this isn't the name of the tiddler, but the name defined inside it in LaTeX test: Font Awesome Test: Waving flag: A material made of atoms bonded by metallic bonds (see Chemical bonds) A metallic element is one that forms a metal when in its pure solid state. Pure metallic crystalline solids are found almost always in BCC, FCC, and sometimes HCP crystal arrangements. Furthermore, BCC and FCC appear only in metals, at least when looking at pure element crystals (see Periodic table (crystal structure)) Study of the nature of Nature. See Philosophy for the basis of my metaphysics. Basically my ontology is based on two levels, depending on certainty (not sure if these are the best names): The physical world is based purely on primary substances (concrete things of the physical world). Other things are just emergent properties of it, including us and our thoughts. 
Those thoughts are where abstract Concepts and Knowledge reside. I think a good way to approach metaphysics is via Systems theory, and Science. Descartes, Leibniz, Spinoza A powerful JavaScript framework for both Frontend web development and Backend web development Assume functions in the asymptotic expansion depend on through variables, corresponding to different time scales: , , , etc., as with real. Uses Riemann-Lebesgue lemma: Riemann-Lebesgue lemma ... Useful also when doing integration by parts for Asymptotic approximation of integrals See statement in notes Method of stationary phase Split the integral into the region close to the stationary phase point(s) and the rest. Then it's similar to Laplace's method See example in notes. Important notes: for , are generally complex, and the integral is along a complex contour in general. See justification in notes. Also: Handouts from lecture Steepest descent contour refers to the contour of steepest descent of , the real part of . That is, the contour parallel to its gradient, . This is because, for an analytic function, is perpendicular to , so that the steepest descent contour is also a contour of constant imaginary part of . This latter condition (together with others, depending on the problem) is often used to find the contour. The other conditions may be: Remember that when deforming the contour we must include the contribution from any poles that we cross. Example: Steepest descents on the gamma function Example: Steepest descents on the Airy function Both of these are movable-saddle problems, so we first need to rescale variables so that the saddle is fixed. Revise Watson's lemma, and examples A metric on a Set is a map (i.e. from the Cartesian square of to the Real numbers) that satisfies the conditions: A.k.a. microfluidics. See Complex fluid dynamics. For fluids at even smaller scales (nanoscales), one talks about nanofluidics. 
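The "steepest descents on the gamma function" example mentioned above yields, at leading order, Stirling's formula, Gamma(x+1) ≈ sqrt(2*pi*x) * (x/e)^x. A quick numerical check of the saddle-point result (the sample points are arbitrary):

```python
import math

def stirling(x):
    # Leading-order steepest-descent (saddle-point) approximation to
    # Gamma(x + 1) = integral_0^inf t^x e^(-t) dt, whose saddle is at t = x.
    return math.sqrt(2 * math.pi * x) * (x / math.e) ** x

for x in (5, 10, 50):
    exact = math.gamma(x + 1)
    print(x, stirling(x) / exact)  # ratio tends to 1 as x grows (error ~ 1/(12x))
```

The next term of the asymptotic series multiplies the leading result by (1 + 1/(12x)), which matches the residual error seen here.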
Microfluidics: Fluid physics at the nanoliter scale Nanofluidics, from bulk to interfaces Encyclopedia of Microfluidics and Nanofluidics https://scholar.google.com/scholar?cites=17428969843030616661&as_sdt=2005&sciodt=0,5&hl=en See Phoretic mechanisms of colloids. Colloid Transport by Interfacial Forces Osmotic effects Diffusioosmosis, thermoosmosis, etc. Microtubules (micro- + tube + -ule) are a component of the cytoskeleton, found throughout the cytoplasm. These tubular polymers of tubulin can grow as long as 50 micrometres and are highly dynamic. The outer diameter of a microtubule is about 24 nm while the inner diameter is about 12 nm. They are found in eukaryotic cells, as well as some bacteria, and are formed by the polymerization of a dimer of two globular proteins, alpha and beta tubulin. (https://en.wikipedia.org/wiki/Microtubule) Some types of Molecular motors walk along microtubules to transport molecular cargo inside a cell. They are the primary component of the Spindle which separates chromosomes during cellular division. Microtubule turnover: the process by which microtubules decay, and are replaced. "Turnover" can also refer to the rate of this process. See turn over (definition): To be replaced by something else of the same kind. See here. Also microtubules page. A complex system that has features associated with sentient, intelligent and conscious beings. It is capable of thought and emotion (see Philosophy, Cognitive science, Philosophy of mind). It is currently only physically realized in the Brain, but Computer-based versions are very likely possible, and are part of the transhumanist vision. Inspiration from neuroscience -> neural networks. Convolutional network. Matthew Zeiler & Rob Fergus Supervised vs unsupervised. A good principle for learning is for the machine to try to reconstruct the things it wants to learn using its neural net. If what it reconstructs doesn't agree with what it then sees, it should learn. 
This sounds like learning by imitation. Regularity helps. Multimodal learning: combining different kinds of data. Sequence learning and recurrent nets: have memory, can predict sequences (in time, say). Can parse words, and they show that grammar can be learned. Being able to fill gaps in the information you receive (like our brain does, or like machines do with generative models, which also learn) is useful for decision making, as you can know what to expect, even with incomplete info. Siamese neural network. Q-learning. Reinforcement learning. Imitation learning. Back-propagation.
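The reconstruction principle and back-propagation listed above can be combined in a minimal example: a linear autoencoder trained by gradient descent to reconstruct its input through a bottleneck. Everything here (data, sizes, learning rate) is a made-up sketch, not a serious model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: points on a 1-D line embedded in 3-D, so a 1-unit
# bottleneck suffices to reconstruct them.
t = rng.uniform(-1, 1, size=(200, 1))
X = t @ np.array([[1.0, -2.0, 0.5]])

# Linear autoencoder: encode 3-D -> 1-D, decode 1-D -> 3-D.
W_enc = rng.normal(scale=0.1, size=(3, 1))
W_dec = rng.normal(scale=0.1, size=(1, 3))
lr = 0.05

for _ in range(500):
    H = X @ W_enc                       # encode
    X_hat = H @ W_dec                   # reconstruct
    err = X_hat - X                     # reconstruction error drives learning
    # Back-propagation: chain rule through decoder, then encoder.
    W_dec -= lr * (H.T @ err) / len(X)
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / len(X)

mse = np.mean((X @ W_enc @ W_dec - X) ** 2)
print(mse)  # close to zero: the 1-D code reconstructs the data
```

The "if what it reconstructs doesn't agree with what it sees, it should learn" principle is exactly the `err` term driving the weight updates.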
Outcome: Distinction (First class honours). Exam marks BACKUP LECTURE NOTES Courses to take and course requirements Combined timetable No soft matter on Tuesday. Instead Friday at 12am Mathematical institute classes The Oral Presentations will take place in week 5 of Trinity term, not week 4. The Trinity term mini-projects will be released at 12noon on Monday of week 6 and are to be submitted by 12noon on Monday of week 9 (rather than weeks 5 and 8 respectively). The Trinity term take-home-exams are to be released at 12noon on Monday of week 9 of Trinity term and are to be submitted by 12noon on Wednesday of week 9 (rather than in week 5). In particular I looked at networks formed by Physarum polycephalum when connecting food sources. I used a mathematical model of these networks and looked at their features. They turn out to perform rather well under metrics of efficiency and robustness. They also display typical features of Spatial networks, in particular Planar networks. See Physarum machines and physarum solver, and project in Overleaf. See code in Dropbox. In particular, on the Relations between the stability of Boolean networks and percolation On the Duffing oscillator The effects of small damping, nonlinearity and forcing on a harmonic oscillator: There are potentially qualitatively different forms of the equation, depending on which combination of the parameters considered is non-zero. The Duffing Equation: Nonlinear Oscillators and their Behaviour The presentation should not be longer than 20-25 minutes and there will be a 5-10 minute discussion session
after the presentation. You are free to choose whether you want to give a blackboard presentation or use
slides. Timetable My presentation is on Friday 27th May in L5, at 12:30. Practice it Contingency, convergence and hyper-astronomical numbers in biological evolution Notes on Ard Louis' paper on contingency, convergence and hyper-astronomical numbers in biological evolution
Hunting Darwin's Snark: which maps shall we use?
The effect found in many Genotype-phenotype maps by which some phenotypes have many more corresponding genotypes than other phenotypes. This effect is important in Evolution. See MMathPhys oral presentation –The structure of the genotype–phenotype map strongly constrains the evolution of non-coding RNA pdf. Notes on the RNA GP map bias paper Evolutionary Robotics and computing use GPMs. See Evolutionary computing and Optimization. See References from Complex Behavior in Evolutionary Robotics book An effect, where effectively large neutral spaces are also favoured, but in equilibrium, not out of equilibrium as in the Arrival of the frequent More Convergent evolution as natural experiment: the tape of life reconsidered Genotype is the weights of the NN, phenotype is the function the NN approximates. NNs are expected to find "simple" functions much more easily then, I suppose. In other words, they are able to recognize patterns much more easily if there is actually a pattern (in the sense of a simple pattern...) Models that describe the processes by which a network forms or is generated are often called generative network models. One of the most famous ones is the "preferential attachment" model, related to the "rich get richer" idea in economics (Herbert Simon). Preferential attachment (also called cumulative advantage in older literature) refers to the idea that new nodes in a network preferentially attach themselves to some nodes in the existing network rather than others. The attachment is described in terms of a probability distribution over existing nodes for the creation of an edge. The preference is described by an attachment kernel, , which is the probabilistic weight of node . The probability that a new node connects to existing node is thus: Different preference types can be considered, the main categories being: The attachment kernel is then generally a function of these: . Note: We need a seed network (initial condition) to get any network out of this model. 
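The attachment rule just described can be simulated in a few lines. Here the kernel is taken purely proportional to degree and each new node adds one edge; the triangle seed and all sizes are made-up choices:

```python
import random
from collections import Counter

random.seed(0)

# Seed network (an initial condition is required, as noted): a triangle.
edges = [(0, 1), (1, 2), (2, 0)]

# Degree-proportional attachment, implemented by picking a uniformly
# random endpoint from the edge list: node i appears deg(i) times in
# `endpoints`, so it is chosen with probability deg(i) / (2 * len(edges)).
endpoints = [v for e in edges for v in e]

for new in range(3, 10_003):
    target = random.choice(endpoints)   # attachment kernel proportional to degree
    edges.append((new, target))
    endpoints += [new, target]

degree = Counter(endpoints)
# "Rich get richer": the maximum degree ends up far above the mean (~2).
print(max(degree.values()))
```

The endpoint-sampling trick avoids ever computing the normalization of the kernel explicitly, which is why this runs in linear time.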
The network will eventually be independent of the seed, but this can take a very large number of nodes , sometimes on the order of billions. Proposed in the study of citation networks. The main assumption of the model is that the probability of each new edge created when we add a new node only depends on the degree of that node (on the in-degree to be precise, i.e. the number of citations it has). In particular it assumes an affine preferential attachment: One can write a master equation for the degree distribution, which has a steady state (i.e. behavior given by power-law decay with power ). Thus, many scholars believe that this simple model may describe the fundamental mechanism by which power laws are obtained in many real-world networks. Almost a special case of the de Solla Price model, but with new assumptions: Degree distribution as a function of time of creation Nodes that were added earlier to the network have had more time for new nodes to attach to them, and thus on average have higher in-degree. This can be shown by starting with a new quantity: the fraction of nodes (on average over the ensemble, so effectively the probability ) that a node was created at time and has in-degree when the network has vertices, . The "time" increments by one every time we add a node, and thus effectively labels nodes, in the order by which they were added. One can then write a master equation, noting that no nodes have , except the new node which has , and also in-degree . However the fraction of nodes having any being created at any particular time goes to as , and so we change variables to a probability density in by dividing by . We also rescale time by dividing by for convenience, and to properly convert the master equation into a differential equation. Sizes of in-components Can also derive a master equation. See homework problem 4 Kleinberg et al. have proposed a model where new nodes imitate the out-edge configuration of an existing node. 
This is done by linking to some of that node's neighbours, while the rest of the connections are to randomly chosen nodes in the network. In particular, we first choose a node uniformly at random, and then go through its edges, copying each with probability , or ignoring it and choosing a node at random with probability . Remarkably, the expression for the fraction of nodes with degree when the network size is has the same form as in Price's model, but with an exponent given by an expression depending on , and thus it also follows a power law. The networks still differ in other structural aspects, in particular regarding correlations. This model reminds us that just knowing the degree distribution doesn't tell us the mechanism that gave rise to it. We need more information to make this inference. In some biological networks (metabolic and protein-protein networks) vertex copying seems to be the most probable explanation for observed power law distributions. The mechanism by which this happens is gene duplication (by which, when copying DNA, a gene is duplicated by mistake) and point mutations (a mutation of a single base pair). This, through evolution, creates different proteins, which (due to their common origin) are still similar and have a lot of protein-protein interactions in common. Observations of power law in protein and metabolic networks: Lethality and centrality in protein networks The large-scale organization of metabolic networks Proposed models A Model of Large-Scale Proteome Evolution http://www.santafe.edu/media/workingpapers/01-08-041.pdf Modeling of protein interaction networks An alternative way networks may "form". Often these are rationally created networks to optimize toward some goal. Travel time and cost trade-offs A good example is airline networks, where a compromise between lowering cost (so having more central hubs and spokes to fill planes more fully, rather than flights between two minor destinations) and length of travel (to satisfy customers) is sought. 
Ferrer i Cancho has one such simple model to find compromises between mean geodesic distance (travel time) and number of edges (cost). They find interesting regimes with local minima: trees with exponential distributions, passing through trees with power-law distributions, and finally star graphs, as the parameter controlling the relative importance of the two compromising variables is varied. However, for most values of the parameter, the global minimum was actually the star graph. An alternative model shows interesting behavior in the global minimum too, by assigning an actual geometric distance to the edges (so that it is a spatial network, see MMathPhys miniprojects.Networks). Depending on whether they assigned more importance to travelling times, or to waiting times at nodes, they got more road-like networks (waiting times at intersections negligible) or more airline-like networks (waiting times significant). See recent research: Like air traffic, information flows through neuron 'hubs' in the brain, finds IU study Assembly of Bacterial Ribosomes David Odde - Microtubule Self-assembly List of protein structure prediction software My 65 years in protein chemistry Programming the Emergence in Morphogenetically architected complex systems Protein folding problem 50 years on A review of recent advances in ab initio protein folding by the Folding@home project Molecular physics is the study of the physical properties of molecules, the chemical bonds between atoms, as well as the molecular dynamics. It is closely related to Atomic physics As can be shown again by approximating the sum by an integral, all the moments of a power-law distribution diverge for . Of course, this is in the limit of an infinite system with the same distribution; in finite systems (as in networks with a finite number of nodes), the moment will be finite (for a network, will have a maximum value, cutting off the domain of the integral used to calculate ). 
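The finite-size cutoff just described is easy to see numerically: for a degree distribution p_k proportional to k^(-alpha), with alpha = 2.5 (a made-up value in the divergent range alpha <= 3), the second moment keeps growing as the cutoff k_max increases instead of converging:

```python
# Second moment <k^2> of p_k proportional to k^(-alpha), truncated at k_max.
# For alpha <= 3 it diverges as k_max -> infinity; here it grows roughly
# like k_max ** (3 - alpha).
alpha = 2.5

def second_moment(k_max):
    norm = sum(k ** -alpha for k in range(1, k_max + 1))
    return sum(k ** (2 - alpha) for k in range(1, k_max + 1)) / norm

for k_max in (10 ** 2, 10 ** 4, 10 ** 6):
    print(k_max, second_moment(k_max))
```

Each factor of 100 in the cutoff multiplies the second moment by roughly 100^(3 - alpha) = 10, exactly the integral-approximation argument in the text.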
(See context at the Arrival of the frequent). Neutral spaces can be astronomically large, much bigger than even the largest viral or bacterial populations (see this paper). In that case, the local neighborhood of the population may not be fully representative of the neighborhood of the entire space. This scenario can be most easily understood in the monomorphic limit, when mutants are rare. Now, the (average) rate of neutral mutations (per individual) is , as is the probability that a mutation is neutral. See more in the Monomorphic limit (Wright-Fisher model) tiddler. Furthermore, Kimura showed two things relating to fixation (see Population genetics): Now, when mutations are rare enough (that the same mutation occurring twice simultaneously is very unlikely), a mutation will initially just have a frequency . This fact, combined with the above results, implies two things: The second point means that we are in a situation where the population fixes to a particular genotype in , in the relatively fast time-scale , and stays there during the much longer time , before it fixes to a new genotype. (...) Short-term correlations refer to: p-type individuals are being sampled from the same set (the set of p-types in the 1-neighbourhood of the currently fixed q-type genotype which most of the population has) throughout the time that the population is fixed to a particular genotype. When the population (relatively quickly) transfers to a new genotype, the p-types produced are now sampled from a new set, but still all of them from the same set. The fact that they are sampled from the same set in inter-refixation times (tau_f) means they have correlations that last tau_f on average ("short-term"). If fixations occur much before the set of p-types in the 1-neighbourhood is explored, these correlations are no longer observed. 
As our evolutionary process is a Markov process, the first discovery time of a neighbour genotype, as well as the arrival time of the neutral mutant ‘‘destined’’ to be fixed, are distributed geometrically (or exponentially in a model with continuous time). Thus the means of these times are equal to the respective standard deviations, and we have large fluctuations. The geometric distribution comes about because the Markov property implies that one can define a probability for each of the two events above ({discovery of a neighbour genotype}, and {arrival of the neutral mutant ‘‘destined’’ to be fixed}), and then each generation corresponds to a Bernoulli trial, and first-arrival times follow a geometric distribution. For example, the probability of {arrival of the neutral mutant ‘‘destined’’ to be fixed} is approximately (valid when , which we assume). This ultimately comes from the fact that {when the probability of an event is small, {the average number of times it occurs on a set of trials} is approximately the same as {the probability of it occurring any number of times}}. Essentially when . See Probability theory too). The continuous time approximation: the mean {generation of first success, } is fixed to (where is the prob. of success in a Bernoulli trial). We rescale the time variable as , and the mean is , where is the reciprocal of the time step (i.e. the time we define that a generation lasts). The geometric distribution becomes . Now, is the time scale to find all the 1-neighbour genotypes. If is the number of mutations that can take to a , then is the time-scale to get a mutant from . This is because is the probability that {a mutation from leads to }. The mean will be of this same order (and I think equal actually). Therefore the time to {first get {a mutation from leads to }}, , is distributed according to , where is a normalization constant. 
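The Bernoulli-trial picture above (geometric first-arrival times whose mean is approximately equal to their standard deviation when the per-generation success probability is small) can be sketched as:

```python
import random

# First-arrival times under per-generation Bernoulli trials are geometric,
# so mean ~ standard deviation when the success probability p is small
# (p and the sample size here are illustrative).
random.seed(0)
p = 0.01
times = []
for _ in range(20000):
    t = 1
    while random.random() >= p:   # each generation is one Bernoulli trial
        t += 1
    times.append(t)

mean = sum(times) / len(times)
std = (sum((t - mean) ** 2 for t in times) / len(times)) ** 0.5
print(mean, std)   # both close to 1/p = 100
```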
Therefore, the {probability to get {a mutation from leads to } in a time (the time between two consecutive fixations)} is . Integrating over the distribution of , we have the probability that phenotype is discovered before the next neutral fixation: For (large population limit): We can apply a mean-field approximation to the monomorphic limit. Let be the probability that a genotype in has the given value of . Then , if we assume for . Then , where is the average of . For (large genome limit), . In particular, . Then . Finally, is {the probability that phenotype is discovered before the next neutral fixation}, i.e. the probability that the {number of times {[phenotype ] appears} before the next neutral fixation} is greater than , which is approximately the same as {the average number of times [it] appears}, if {{the probability that {[it] appears in one generation}} is small}, which is the case as {in the monomorphic limit, mutants are rare, }. Then, is the average of this quantity, which we use in the mean-field approximation. Then, following the same derivation as in Polymorphic limit (Wright-Fisher model), we have where is the (mean) duration of each "step" (corresponding to going from being fixed to one genotype to being fixed to another). Now, {the average number of mutations from a genotype in leading to phenotype } can be expressed as , as is the mean probability that {a single-point mutation from a genotype in leads to phenotype }, and is the number of single-point mutations. Now, we can find at the two limits of interest: See Simplicity bias in finite-state transducers The graph structure of a deterministic automaton chosen at random See Random deterministic automata On the entropy of a hidden Markov process On Grammars, Complexity, and Information Measures of Biological Macromolecules Activities and Sensitivities in Boolean Network Models Complexity theory
– Descriptional complexity –>Entropy and complexity of finite sequences as fluctuating quantities –>Lempel-Ziv complexity analysis of one dimensional cellular automata Coding Theorems for Individual Sequences. His complexity measure looks very similar to the topological entropy defined here. http://arxiv.org/pdf/1512.04270v2.pdf . ε-machine reconstruction, or computational mechanics, is a powerful tool in the analysis of complexity, which has been used in a wealth of different theoretical and practical situations. Entropy of Hidden Markov Processes and Connections to Dynamical Systems: Papers from the Banff International Research Station Workshop –
Codes, Systems, and Graphical Models –
An Introduction to Symbolic Dynamics and Coding –
Fundamentals of Codes, Graphs, and Iterative Decoding –
Topological Entropy and Equivalence of Dynamical Systems –
Symbolic Dynamics and Its Applications –
Ergodic Theory and Topological Dynamics of Group Actions on Homogeneous Spaces –
Substitutions in Dynamics, Arithmetics and Combinatorics –
Combinatorics on Words –
Fractal Geometry, Complex Dimensions and Zeta Functions –
Dynamics and Randomness Resolving Markov Chains Onto Bernoulli Shifts Via Positive Polynomials Complexity of strings in the class of Markov sources – citations On the entropy of a hidden Markov process – citations lempel ziv complexity finite state channel
lempel ziv complexity markov model Check what these are! epsilon machines Random matrix product Capacity of finite state channels based on Lyapunov exponents of random matrices Basic properties of the projective product with application to products of column-allowable https://en.wikipedia.org/wiki/Morphogenesis "How the tiger got its stripes." Turing foundational paper Reaction-diffusion equations Computer simulation of reaction-diffusion equations Xmorphia Nice exploration of the Gray-Scott reaction-diffusion DE. https://en.wikipedia.org/wiki/Mpemba_effect warm freezes faster http://www.eoht.info/page/Erasto+Mpemba O:H-O Bond Anomalous Relaxation Resolving Mpemba Paradox Why Hot Water Freezes Faster Than Cold—Physicists Solve the Mpemba Effect Mechanisms Underlying the Mpemba Effect in Water from Molecular Dynamics Simulations See Deep learning good for generalizing models, transfer learning, multi-task learning. Good when you don't have much supervision data. Max-margin: learning a function that identifies sensible data (e.g. sentences that make sense); that's what we do with the algorithm he explains: finding a probability distribution that is bigger at the data points than "anywhere" else. This will, in particular, make the NN learn a good representation of the data, or embedding. For this we use hinge loss. In practice, we do this: learn embeddings in one task and transfer these to solve new tasks. Example. He explains how deep multi-instance learning works. Nice Example: Bi-lingual word embeddings When you can't corrupt the data: Siamese networks Paper Example: Question answering system. Followed by relation learning (learning triplets like "cat eats mouse") memory networks (see below) may be useful for transfer learning too. One-shot learning using conv nets: as we already have good embeddings, just compare objects in embeddings. 
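The Gray-Scott reaction-diffusion system linked above (Xmorphia) can be explored with a few lines of NumPy; a minimal sketch, where the grid size, seed, and time step are illustrative assumptions and F, k are taken from the commonly quoted "spots" regime:

```python
import numpy as np

# Minimal Gray-Scott reaction-diffusion sketch; F, k are from the
# commonly quoted "spots" regime, and the grid size, seed and step
# count are illustrative assumptions.
n = 64
U = np.ones((n, n))
V = np.zeros((n, n))
U[28:36, 28:36] = 0.50                 # seed a small square perturbation
V[28:36, 28:36] = 0.25
Du, Dv, F, k, dt = 0.16, 0.08, 0.035, 0.065, 1.0

def lap(Z):
    # five-point Laplacian with periodic boundaries
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

for _ in range(2000):
    uvv = U * V * V
    U += dt * (Du * lap(U) - uvv + F * (1 - U))
    V += dt * (Dv * lap(V) + uvv - (F + k) * V)

print(float(V.max()))   # the pattern the chemistry settles into
```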
See beginning of this Review paper: https://arxiv.org/abs/1309.7233 When a set of entities interact with each other in complicated patterns that can encompass multiple types of relationships, change in time, and include other types of complications. Such systems include multiple subsystems and layers of connectivity. The structure and dynamics of multilayer networks Some types See paper for details: A network in layers, and with connections between layers; the interconnections between layers are only between a node and its counterpart in the other layer (i.e. the same node). Introduction to Processes & Threads Processes are divided into threads, each of which has its own Call stack, but which share the memory (owned by the process). This can make programs more efficient. For instance, Microsoft Word may be a single process. However, it may have a thread for reading input, one for writing to files, and another one for printing to screen. Concurrent programming designs the program so that these threads may be running for the duration of the process, instead of switching between them. This abstraction of concurrent threads allows for easier design of many large programs. However, it creates some challenges in keeping execution synchronized, so that actions between different threads don't get mixed up. For instance, a thread may begin writing to some object in memory, and the scheduler switches to a different thread, which now begins to write to that object. The result of this may not be as desired, if one didn't take this possibility into account. A thread that is independent can be called a daemon. Classical and orchestral music Beethoven Mahler. 
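The thread-synchronization problem described above can be sketched in a few lines: a shared counter is updated by several threads, and a lock keeps each read-modify-write atomic (the counter and the thread/iteration counts are made up for illustration):

```python
import threading

# Sketch of the synchronization problem: operations on shared memory
# must not interleave, so the read-modify-write is guarded by a lock.
counter = 0
lock = threading.Lock()

def work():
    global counter
    for _ in range(50_000):
        with lock:                 # without the lock, updates could be lost
            counter += 1

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # 4 * 50_000 = 200000 with the lock in place
```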
Symphony 5 Symphony 1 https://www.youtube.com/watch?v=eqfmwxakJEk The Greatest Waltzes of All Time Best a cappella: Carmel Acappella Electronic music Stimming Haywyre: The Voyage Full Album Concentration \ Programming Music 001 (part 1) https://www.youtube.com/watch?v=eHOb7IJ6Bk0 https://www.youtube.com/watch?v=s1lvEVg1T9k https://www.youtube.com/watch?v=uQ5FfZ1wKSA https://www.youtube.com/watch?v=WXiJBR5i9bM Experimenting with alternative tunings: https://soundcloud.com/roberto-la-forgia/sets/beauty-in-the-beast-by-wendy-carlos-1968 nice psychedelic music: https://www.youtube.com/watch?v=wzLLPccLjlk Space ambient Stellardrone. https://www.youtube.com/watch?v=7OQx3dMjBMQ&list=PLmGEbmwqAA4IYqCuH3bHzTVVtdpG6N4IJ&index=4 https://www.youtube.com/watch?v=1iKA2wJp97s Vangelis. Cosmos, voices To the unknown man, messages, ask the mountains, conquest of paradise, alpha, Entends-vous les chiens aboyer? OSTs Interstellar Oblivion https://www.youtube.com/watch?v=l8SfXhG2zxg https://www.youtube.com/watch?v=Bvf5F7UfQ3c Xenharmonic music https://en.wikipedia.org/wiki/Xenharmonic_music See Human hearing. Equal temperament and just intonation: https://www.youtube.com/watch?v=VRlp-OH0OEA See also xenharmonic music. Neutral evolution of mutational robustness In evolution of ribozymes in vitro, mutations responsible for an increase in fitness are only a small minority of the total number of accepted mutations (see Continuous in vitro evolution of catalytic function.). This fact indicates that, even in adaptive evolution, the majority of point mutations are neutral. This is the basis of Kimura's neutral theory of evolution, see the paper. A neutral network is a collection of mutually neutral genotypes (i.e. producing the same phenotype, whether structure or function), which are connected via single mutational steps; they sometimes form extended networks that permeate large regions of genotype space. 
A population is mutationally robust (insensitive to mutations) when it inhabits a highly interconnected region of the network, so that most mutations lead to the same neutral network, thus leaving the phenotype unchanged. In Neutral evolution of mutational robustness, the authors found analytically that, for a range of population sizes and mutation rates of biological interest, the population's distribution over a neutral network is determined solely by the network's topology. In Information theory, the mutual information between Random variables , and is defined as: where denotes expectation. The mutual information measures the amount of information we obtain about by knowing (see result below). The mutual information between a random variable and itself is equal to its entropy. Some results (video): is the Conditional entropy and thus gives you the information about that doesn't give you. Eric Drexler's scheme Oxford as one center to develop this!? Questions: Exactly how do the blocks hold together? The stepper motor surfaces should be designed to have modulatable attractive potentials, I imagine. Predict the stochastic motion of these. How large do surfaces have to be? How fast will they move? Talk at Martin School on Jan 2016 Protein engineering Extended structures: Design of ordered two-dimensional arrays mediated by noncovalent protein-protein interfaces Compact structures: Accurate design of co-assembling multi-component protein nanomaterials Photoswitching http://pubs.rsc.org/en/content/articlelanding/cc/2013/c3cc46045b#!divAbstract . Allows more wavelengths to be used, including red light, which penetrates the skin more. http://onlinelibrary.wiley.com/doi/10.1002/anie.201207602/abstract . Can now switch at nanosecond rates. Key Challenge: Coordinated cross-disciplinary development DOE workshop. Oxford, etc. Often only 3% of medicines reach their target. Particularly bad with cancer. We need smart nanosystems. Problem with aggregation. 
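The mutual-information definition above, and the identity I(X;X) = H(X), can be checked numerically via I(X;Y) = H(X) + H(Y) - H(X,Y); the toy joint sample below is made up for illustration:

```python
from collections import Counter
from math import log2

# Mutual information from an empirical joint distribution, using
# I(X;Y) = H(X) + H(Y) - H(X,Y); the sample pairs below are made up.
pairs = [(0, 0), (0, 0), (1, 1), (1, 1), (0, 1), (1, 0), (0, 0), (1, 1)]

def entropy(counts):
    n = sum(counts.values())
    return -sum(c / n * log2(c / n) for c in counts.values())

H_x = entropy(Counter(x for x, _ in pairs))
H_y = entropy(Counter(y for _, y in pairs))
H_xy = entropy(Counter(pairs))
I = H_x + H_y - H_xy                   # information Y carries about X

# Sanity check of the identity I(X;X) = H(X):
I_self = 2 * H_x - entropy(Counter((x, x) for x, _ in pairs))

print(I, I_self, H_x)
```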
Using circular DNA wrapped around carbon nanotubes makes them stable in many solution media. Target only cancers with a molecule added to the DNA. Add a fluorescent molecule for diagnostics. Then theragnostics: both therapeutics and diagnosis on the spot. Can use the wrapping of DNA for other nanoparticles too! Nanotoxicity and nanowaste are important problems too. A Logic-Gated Nanorobot for Targeted Transport of Molecular Payloads Detect causes of diseases before they are pathological. Nano-needle. Very non-aggressive. 3D-printing with cells. We need extracellular matrix (scaffold) to guide stem cells as they develop (in particular when they stop growing). Stability of DNA nanomachines in cellular environment One of the main challenges in DNA-based nanomedicine! One of the main arguments for approaching nanomedicine with some non-organic materials. Addressing the Instability of DNA Nanostructures in Tissue Culture News: http://www.nanowerk.com/ Foresight Institute Nanosystems and nanoelectronics https://en.wikipedia.org/wiki/List_of_software_for_nanostructures_modeling http://nanohub.org/resources/4540 Engineering programmable molecular systems inspired by biology See some papers on my facebook posts. Unconventional computing using evolution-in-nanomaterio: neural networks meet nanoparticle networks Order custom DNA origami parts! http://ezproxy-prd.bodleian.ox.ac.uk:2076/nnano/journal/v5/n7/full/nnano.2010.147.html Link my slides from Physsoc talk here~ Eric Drexler blog Self-assembly for nanotechnology Software: See also Computational chemistry Bionano Remote control of myosin and kinesin motors using light-activated gearshifting See Convolutional neural network for more; also Formal language Hybrid character-word NNs: http://arxiv.org/pdf/1604.00788v1.pdf Nearest-neighbor methods: To get the prediction Ŷ for a point , use [those observations ( of them) in the training set T, closest in input space to point x]. Remember the training set is a set of pairs . 
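The nearest-neighbour prediction rule above can be sketched in a few lines (the toy 1-D data and the choice k = 3 are made up for illustration):

```python
import numpy as np

# k-nearest-neighbour regression sketch: predict y(x) as the mean of the
# k training targets closest to x in Euclidean distance (toy 1-D data).
X_train = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
y_train = np.array([0.0, 1.0, 4.0, 9.0, 16.0])

def knn_predict(x, k):
    d = np.linalg.norm(X_train - x, axis=1)   # distance to each training point
    nearest = np.argsort(d)[:k]               # indices of the k closest
    return y_train[nearest].mean()

print(knn_predict(np.array([2.2]), 3))   # mean of y at x = 1, 2, 3
```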
Closest often refers to Euclidean distance. It turns out that the effective number of parameters of k-nearest neighbors is , even if technically there is only one parameter, . –> To me it seems more like a method in Nonparametric statistics! Indeed it is (see Wiki). When a flow of liquid occurs through a membrane from a more concentrated solution to a more dilute solution, it is designated as negative osmosis. Compare with (positive) Osmosis See Physical mechanisms of osmosis Many standard theoretical calculations of equilibrium osmotic pressure work under very ideal/simplifying assumptions, like the lattice model in Physical biology of the cell book. Similarly, treatments of osmotic flow are often simplified; see for instance
The solution-diffusion model: a review. Real-life applications of osmotic flow need more complete descriptions, which include parameters that are often measured, mainly the reflection coefficient. Results can often differ (often just quantitatively) from more naive thermodynamic treatments. The hard part comes when you try to find these parameters theoretically, as a microscopic model is needed, whether based on kinetics or hydrodynamics. This is where the richness of real-life phenomena comes to light. Careful theoretical treatment has found negative reflection coefficients to be possible: MECHANISM OF OSMOTIC FLOW IN POROUS MEMBRANES, Diffusioosmosis of nonelectrolyte solutions in a fibrous medium However, I think they are only possible for non-perfectly-semipermeable membranes! See below. Actually Anderson's paper agrees! Note that in his figure 5, a negative reflection coefficient is found only when the solute is smaller than the pore! Configurational effect on the reflection coefficient for rigid solutes in capillary pores In the case of osmosis of electrolytes, there are more studies: Charge-Mosaic Membranes: Enhanced Permeability and Negative Osmosis with a Symmetrical Salt Diffusioosmosis of Electrolyte Solutions in a Fine Capillary Tube Although regular osmosis looks at semipermeable membranes, similar diffusio-osmotic effects can be studied for membranes where both solute and solvent can go through the pores: Osmotic Flow through Fully Permeable Nanochannels Drastic alteration of diffusioosmosis due to steric effects Kinetics and thermodynamics across single-file pores: Solute permeability and rectified osmosis (they only find a negative reflection coefficient (negative diffusion) when the membrane is permeable to solute as well). 
Experimental measurements of negative osmosis NEGATIVE REFLECTION COEFFICIENTS Entropy-Driven Pumping in Zeolites and Biological Channels (finds negative osmosis, only when the membrane is permeable to both species) Binary Diffusion and Bulk Flow through a Potential‐Energy Profile: A Kinetic Basis for the Thermodynamic Equations of Flow through Membranes (finds negative osmosis, only when the membrane is permeable to both species) Nonequilibrium thermodynamics in biophysics book by Katzir-Katchalsky, Aharon. | Curran, Peter F (in Maths Inst library!) An Experimental Study of Negative Osmosis Anomalous effects during electrolyte osmosis across charged porous membranes Osmosis and reverse osmosis in fine-porous charged diaphragms and membranes OSMOTIC PRESSURE, ROOT PRESSURE, AND EXUDATION Osmotic properties of polyelectrolyte membranes: positive and negative osmosis A neighbourhood space is a weaker notion than a Topological space. It is a Set with a Neighbourhood structure. See Császár 1978 Measures of Complexity of a Graph or Network Quantitative Measures of Network Complexity Correlation of automorphism group size and topological properties with program-size complexity evaluations of graphs and complex networks
They show that: Kolmogorov complexity can capture group-theoretic and topological properties of abstract and empirical networks, ranging from metabolic to social networks, to small synthetic networks. We derive these results via two different Kolmogorov complexity approximation methods applied to the adjacency matrices of the graphs and networks. The methods used are the traditional lossless compression approach to Kolmogorov complexity, and a normalised version of a Block decomposition method (BDM) based on algorithmic probability theory. Complexity is minimal for
empty or complete graphs Kolmogorov Random Graphs and the Incompressibility Method The symmetry is measured by the cardinality of the Graph automorphism group. The following plot from empirical complex networks shows that they are indeed negatively correlated. The graph automorphism group size is normalized, and NBDM refers to the normalized BDM. Entropy and the Complexity of Graphs Revisited Information Content of Colored Motifs in Complex Networks MMathPhys course mostly about Network Theory Books Networks: An introduction - Newman See books by Barabasi et al.; there are nice ones. Other research articles Check wikipedia network science portal and other resources. ~my problem sheets here~ Oxford course website and blog. Some important classes of networks: Review articles Statistical mechanics of complex networks Complex networks: Structure and dynamics Others Many many good references here on random and evolving networks: http://www.fzu.cz/~slanina/bookmark_files/bkm3-1.html Like air traffic, information flows through neuron 'hubs' in the brain, finds IU study Memory is good for recognizing time sequence data. Memory networks. Apply max-margin. Actual description. Paper Recurrent neural nets. Vanishing gradient problem: naively, RNNs don't give you long-term memory. RNNs: Long Short-Term Memory (LSTM) was introduced to solve this problem. Computing systems that imitate the working of Neuronal networks, at hardware and/or software level. A basic model is the Spiking neural network. One advantage is that they tend to be more energy-efficient. Numenta IBM TrueNorth. This is direct evidence that an “integrate-and-spike” mechanism has similar computational capability to the more proven ANNs. The IBM paper however highlighted one major weakness of SNN. 
That is, training of the TrueNorth system required simulation of back-propagation using another conventional GPU: Training was performed offline on conventional GPUs, using a library of custom training layers built upon functions from the MatConvNet toolbox. Network specification and training complexity using these layers is on par with standard deep learning. See more interesting stuff here: Microglia: A Biologically Plausible Basis for Back-Propagation There has however been no biological evidence of a structural mechanism of “back-propagation” in biological brains. Yoshua Bengio published a paper in 2015 (see: http://arxiv.org/abs/1502.04156 ) “Towards Biologically Plausible Deep Learning”. The investigation attempts to explain how a mechanism for back-propagation could exist in Spike-Timing-Dependent Plasticity (STDP) of biological neurons. It is however questionable whether neurons are able to learn by themselves without the need of an external feedback pathway that spans multiple layers. There is, however, an alternative, recently discovered mechanism that may provide a more convincing argument, based on a structure that is independent of the brain’s neurons. There is a large class of cells in the Brain called Microglia ( see: https://www.technologyreview.com/s/601137/the-rogue-immune-cells-that-wreck-the-brain ) that are responsible for regulating the neurons and their connectivity. In summary, biological brains have a regulatory mechanism in the form of microglia that are highly dynamic in regulating synapse connectivity and pruning neural growth. The activity is most pronounced during sleep. SNNs have been shown to have inference capabilities equivalent to Convolutional Networks. SNNs however have not been shown to effectively learn on their own without a ‘back-propagation’ mechanism. This mechanism is most plausibly provided by the microglia. 
https://www.youtube.com/watch?v=4y43qwS8fl4 Neuronal Avalanches in Neocortical Circuits See Neuronal network, and Neuromorphic computing. "Cells that fire together, wire together."
Kimura's neutral theory of evolution. He proposed that (at least for molecular evolution) most mutations are neutral, meaning that they don't lead to a change in fitness. Because different phenotypes often do have different fitness, this comes about because of the large redundancy in GP maps. When the redundancy is large enough for some phenotype, or there are genetic correlations, so that nearby genotypes (in the mutation network) tend to map to the same phenotype, we find large neutral spaces. If Kimura is right, most mutations occur within these spaces and are governed by genetic drift (random changes in allele frequencies in finite populations), not by natural selection. Genotype space, links represent single-point mutations. It has a hypercube network structure. Neutral spaces or neutral sets are those sets of genotypes that produce the same phenotype. These are important in the ideas of the Arrival of the frequent above, which relies on the many-to-one nature of the GPM. It seems to also be related to the Survival of the flattest. Evolution explores neutral space, being exposed to a larger number of neighbouring possibilities, before switching to a different, better phenotype. See Monomorphic limit (Wright-Fisher model) Presentation about genetic drift Founder effect is the loss of genetic variation that occurs when a new population is established by a very small number of individuals from a larger population. 
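A toy sketch of neutral drift on a redundant GP map (the majority-bit "phenotype" below is a made-up many-to-one map, not a real GP map): a walker on the hypercube that accepts neutral mutants finds that most proposed single-point mutations are neutral:

```python
import random

# Toy many-to-one GP map on the hypercube of binary genotypes: the
# "phenotype" is the majority bit (a made-up map, for illustration only).
random.seed(1)
L = 15

def phenotype(g):
    return int(sum(g) > L // 2)        # majority-rule toy GP map

g = [1] * L                            # start deep inside a neutral space
steps = 10000
neutral = 0
for _ in range(steps):
    g2 = g[:]
    i = random.randrange(L)
    g2[i] ^= 1                         # propose a single-point mutation
    if phenotype(g2) == phenotype(g):
        neutral += 1
        g = g2                         # neutral drift: accept the mutant

print(neutral / steps)   # most proposed mutations are neutral
```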
Neutral evolution of mutational robustness https://en.wikipedia.org/wiki/Molecular_clock The Molecular Clock Hypothesis: Biochemical Evolution, Genetic Differentiation and Systematics Smoothness within ruggedness: The role of neutrality in adaptation Batch normalization: Batch normalization - Accelerating Deep Network Training by Reducing Internal Covariate Shift an explanation Deep Networks with Stochastic Depth Stochastic Depth Networks will Become the New Normal Dropout: Dropout: A Simple Way to Prevent Neural Networks from Overfitting https://en.m.wikipedia.org/wiki/Modular_neural_network STN? DCGAN http://arxiv.org/abs/1511.06434 DRAW http://arxiv.org/abs/1502.04623 Soft/hard attention https://www.google.es/url?sa=t&source=web&rct=j&url=http://arxiv.org/pdf/1502.03044&ved=0ahUKEwi4yof-jPjLAhVC5xoKHcTjDM4QFgggMAA&usg=AFQjCNEs1Yw8fZF9oaqo73cwbHJqKwQHTw CharCNN https://www.google.es/url?sa=t&source=web&rct=j&url=http://arxiv.org/pdf/1508.06615&ved=0ahUKEwjZqaXnk_jLAhWCsxQKHZsuApUQFgglMAM&usg=AFQjCNHk8JQpI98eUtyiluv7d2G9aWRtyA NeuralStyle https://github.com/jcjohnson/neural-style "Take a look at @karpathy's Tweet: https://twitter.com/karpathy/status/709465955223543808?s=09" http://arxiv.org/abs/1604.00790 bidirectional LSTM http://www.computervisionblog.com/2016/06/deep-learning-trends-iclr-2016.html Adversarial networks Resources Notes on Nonequilibrium StatPhys MT2015 Oxford (mostly stochastic processes) Statistical physics -- a second course A Kinetic View of Statistical Physics, P.L. Krapivsky, S. Redner, E. Ben-Naim Stochastic Processes in Physics and Chemistry, N. van Kampen Handbook of stochastic methods - Gardiner Non-equilibrium Statistical Physics is the branch of Statistical physics that deals with systems out of equilibrium, so that averages can change in time (Actually not quite: see Thermodynamic equilibrium). This is much harder to do in full generality, as systems offer much more diversity out of equilibrium, as may be expected. 
As said in that page, one often has three approaches: Related things (also in the course/exam): See overview in this lecture Large deviation theory.
Einstein formula (1908) Application of large deviation theory to dynamics: Onsager (1931) Nonlinear response theory Nonlinear Response Theory
Nonlinear projection operator method
Zwanzig projection operator Nonequilibrium Equality for Free Energy Differences Jarzynski 1997. Discussed on lecture 3 of Shin-ichi Fluctuation theorem
Fluctuation Theorems Fluctuation theorems, or fluctuation relations, which have been developed over the past 15 years, have resulted in fundamental breakthroughs in our understanding of how irreversibility emerges from reversible dynamics, and have provided new statistical mechanical relationships for free energy changes. They describe the statistical fluctuations in time-averaged properties of many-particle systems such as fluids driven to nonequilibrium states, and provide some of the very few analytical expressions that describe nonequilibrium states. Quantitative predictions on fluctuations in small systems that are monitored over short periods can also be made, and therefore the fluctuation theorems allow thermodynamic concepts to be extended to apply to finite systems. For this reason, fluctuation theorems are anticipated to play an important role in the design of nanotechnological devices and in understanding biological processes. These theorems, their physical significance and results for experimental and model systems are discussed. Shin-ichi calls them identities, and explains them on Lecture 4 Stochastic thermodynamics Focus on Stochastic Thermodynamics
Stochastic thermodynamics has emerged as a framework for describing small driven systems using thermodynamic notions on the level of individual fluctuating trajectories. Topics on the article: Stochastic thermodynamics: A brief introduction See also new advancements mentioned in the article on the new theory on the origin of life (linked in Abiogenesis) On Various Questions in Nonequilibrium Statistical Mechanics Relating to Swarms and Fluid Flow Read Ilya's book on thermodynamics, where he covers the non-equilibrium part. Also his book on Self-organization, and other books on non-equilibrium statistical physics. See if then I can get a more clear derivation of the Allen-Cahn and Cahn-Hilliard equations in Phase transition, describing general forms of diffusion and phase field evolution. See also Complex systems, which are often analysed using ideas from nonequilibrium statistical physics. Fluctuations in nonequilibrium statistical mechanics. One project
is about rare event simulations, non-Markovian extensions of large deviation theory, and zero-range processes (Harris, Touchette). A second one is about random packing optimization problems, which have very different solutions depending on the shape of the objects (Baule). Stochastic Thermodynamics in Biology Thermodynamic Costs in Implementing Optimal Estimators – Kalman filter
Dynamics of protein synthesis: transcription, translation, and mRNA degradation
Simple models of evolution with selection and genealogies
Universal constraints for biomolecular systems
Stochastic Thermodynamics of Chemical Networks Stochastic approaches in systems biology. See Systems biology Stochastic thermodynamics, fluctuation theorems and molecular machines Video lecture! Udo Seifert - Stochastic thermodynamics 1
lecture series (school on thermalization) More literature on stochastic thermodynamics Martin Z. Bazant
Chemical Kinetics in Nonequilibrium Thermodynamics - Martin Z. Bazant Introduction to stochastic thermodynamics: (prof. dr. M. Esposito) Part1 The stochastic thermodynamics of a rotating Brownian particle in a gradient flow: See stuff in here: MMathPhys Condensed Matter and Astrophysics/Plasma Physics/Physics of Continuous Media Strands Short Syllabi 1. Dynamics of Stochastic Processes (12 lectures)
• Langevin equation and mean-squared displacement versus time, fundamentals of Molecular Dynamics and Stochastic Rotation Dynamics simulation methods
• Probabilistic description of stochastic process, Fokker-Planck equation
• Kramers rate theory, escape probability and first-passage time
• Master equation, equilibrium and detailed-balance, fundamentals of Monte Carlo simulation method, chemical reactions, one-step processes (traffic models), fundamentals of Lattice Boltzmann simulation method
• Diffusion-reaction processes and pattern formation
• Heterogeneous catalysis and the Michaelis-Menten rule in enzymatic reactions
• Rectification of stochastic motion and Brownian ratchets 2. Fluctuations and Response (4 lectures)
• Equilibrium fluctuations, correlation functions
• Density fluctuations, hydrodynamic fluctuations and the long-time tail
• Linear response theory, response function, causality and Kramers-Kronig relations
• Fluctuation-dissipation theorem near equilibrium
• Small-system (stochastic) thermodynamics, Jarzynski equality
• Generalised fluctuation-dissipation theorem in nonequilibrium systems https://scholar.google.co.uk/citations?hl=en&user=1V6ZcgMAAAAJ&view_op=list_works&sortby=pubdate Viewpoint: Debut of a hot “fantastic voyager” Information in Biological Systems and the Fluctuation Theorem Stochastic thermodynamics with information reservoirs Non-equilibrium statistical physics (long lecture series) Introduction to macroscopic fluctuation theory by Giovanni Jona Lasinio Foundations of Synergetics II: Chaos and Noise See Biophysics Non-equilibrium statistical mechanics: from a paradigmatic model to biological transport A type of Percolation process that is non-self-averaging (often? def. of self-averaging?), in the sense that the relative variance of the size of the largest component doesn't vanish in the thermodynamic limit. See also: Achlioptas processes are not always self-averaging – Phase transitions in supercritical explosive percolation – Unstable supercritical discontinuous percolation transitions O. Riordan and L. Warnke showed that k-vertex rule percolation processes are continuous; however, it is equally true that certain percolation processes based on picking a fixed number of random vertices are discontinuous. This paradox is resolved in this paper, where they show that some processes, while continuous at exactly the transition point, still exhibit infinitely many discontinuous jumps in an arbitrary vicinity of the transition point: a Devil’s staircase. This staircase is in fact stochastic, as the jump points and sizes are random variables. This stochasticity is present even in the thermodynamic limit, and that is what gives rise to the non-self-averaging property. Continuous dynamical systems are systems of 1st-order O.D.E.s. Linear dynamical systems (linear O.D.E.s) are easy to analyze, and can be analyzed by looking at the eigenvalues of the Jacobian. Nonlinear continuous dynamical systems are those where the O.D.E.s are nonlinear. 
They offer much richer behavior and thus require more variety in analysis techniques. Locally, however, they can be linearized and analyzed by the same linear Jacobian techniques. Autonomous systems are those that don't have explicit time dependence. Attractors are regions of phase space to which points converge if they begin within a given basin of attraction. Features: Only in 2+ D: Only in 3+ D: Equilibria, a.k.a. fixed points, can be classified by their stability and other qualitative features. See Classification of equilibria in 2D. The classification is done by computing the Jacobian matrix at the fixed point and looking at the eigenvalues and eigenvectors to see how the flow behaves locally. Poincare-Bendixson theorem, trapping regions: useful to prove existence of limit cycles in 2D; also makes chaos impossible in 2D. Need at least 3D! Conservative systems: Non-conservative systems: 1-dimensional flows Bifurcations 2-dimensional flows 3-dimensional flows Other concepts: Global bifurcations: bifurcations that are not identified by a change localized close to a limit point or cycle. These occur when there is a qualitative change in the topology of invariant manifolds, or in the topology of basins. Global bifurcations can be accompanied, or even caused, by local bifurcations. Poincare section: snapshots of phase space of a dynamical system define a map, so that we can use the theory of nonlinear maps to analyze, for example, chaotic attractors. Structural stability refers to when a certain qualitative feature (like a type of bifurcation) isn't changed by small perturbations of the equation, by which we mean addition of small extra terms to the equation. Catastrophe theory studies bifurcations and other qualitative phenomena as control parameters are varied. One can also distinguish discontinuous (or catastrophic) vs continuous bifurcations. See page 252 of Thompson's book. See also page 257 of that book; one distinguishes safe boundaries and dangerous boundaries. 
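The 2D classification of fixed points via the Jacobian can be sketched with the standard trace-determinant criteria. A minimal example (the damped-oscillator system used here is my own illustration, not from the notes; borderline degenerate cases are ignored):

```python
def classify_fixed_point(J):
    """Classify a 2D fixed point from the trace and determinant of its
    Jacobian J (borderline degenerate cases are ignored in this sketch)."""
    (a, b), (c, d) = J
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det            # discriminant of the eigenvalue equation
    if det < 0:
        return "saddle"
    if disc >= 0:                       # real eigenvalues: a node
        return "stable node" if tr < 0 else "unstable node"
    if tr == 0:                         # purely imaginary eigenvalue pair
        return "centre"
    return "stable spiral" if tr < 0 else "unstable spiral"

# Linearization of the damped oscillator x' = y, y' = -x - 0.5*y at the origin
print(classify_fixed_point([(0.0, 1.0), (-1.0, -0.5)]))  # stable spiral
```

The trace-determinant test is equivalent to inspecting the eigenvalues directly, since for a 2x2 matrix they are determined by trace and determinant.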
Examples: Josephson junction (revise this). Books: Strogatz; Thompson and Stewart, Nonlinear Dynamics and Chaos (very good). Discrete-time dynamical systems are sometimes called maps. As usual, there are linear maps, which can be represented by a matrix (plus a constant vector, if the map is affine rather than strictly linear). However, most interesting behaviour is observed in nonlinear maps, in which the state at discrete time depends on the state at the previous time via a nonlinear function, where we allow discrete-time dependence of the function. Autonomous maps don't have such dependence. Cross-sections of the phase plane of a Continuous dynamical system that are nowhere tangential to a trajectory are called Poincare sections. Trajectories become points in the lower-dimensional space of the cross-section, and the dynamical system becomes a discrete map, called the Poincare map. The equivalent of equilibrium points in dynamical systems are fixed points. A fixed point is one that is mapped to itself. Periodic cycles are closed orbits (like limit cycles, or orbits, in dynamical systems). The stability of a fixed point is determined by its multiplier (the derivative f'(x*) of the function defining the map, evaluated at the fixed point). A point is stable if |f'(x*)| < 1, unstable if |f'(x*)| > 1, and neutrally stable if |f'(x*)| = 1 (at which point a bifurcation occurs). One can use the Jury test to find whether the roots of a polynomial are inside the unit circle, which is useful for stability. The stability of a periodic cycle can be found by multiplying the multipliers evaluated at each of the points in the cycle. These numbers are then called the characteristic (Floquet) multipliers; their logarithms are the Floquet exponents. One can also have bifurcations of periodic cycles in 2D maps, I think. There are also global bifurcations in periodic maps, some of which are routes that lead to chaos. See Nonlinear dynamical systems and Chaos theory. 
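The multiplier test for map fixed points can be checked concretely. A sketch using the logistic map x -> r x (1 - x) (my choice of example; at the nonzero fixed point x* = 1 - 1/r the multiplier is analytically 2 - r, so the point is stable for 1 < r < 3):

```python
def logistic(x, r):
    return r * x * (1 - x)

def multiplier(f, x_star, r, h=1e-7):
    """Central-difference estimate of the map's derivative (the multiplier)
    at a fixed point x_star."""
    return (f(x_star + h, r) - f(x_star - h, r)) / (2 * h)

r = 2.5
x_star = 1 - 1 / r                 # nonzero fixed point of the logistic map
m = multiplier(logistic, x_star, r)
print(abs(m) < 1)                  # True: |2 - r| = 0.5 < 1, so the point is stable
```

The central difference is exact (up to rounding) here because the logistic map is quadratic.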
Local linear stability analysis is done by Jacobians, and multipliers are replaced by the Jacobian's eigenvalues, which must now be less than one in magnitude for stability. For periodic cycles, one multiplies the Jacobians. Another very interesting feature of nonlinear maps is that many of them exhibit chaos. Can analyze using Perturbation methods. In particular: For example (for the method of averaging), if is the solution, then we require: Van der Pol oscillator. Paper about its periodic solutions. Apply the method of multiple scales. As an example, consider the van der Pol equation with the nonlinear term very large, instead of very small. We introduce a variable such that the equation becomes simpler. One also shows that the system evolves to a state of quasi-equilibrium (very quickly, on a fast time scale) given by a curve in the y-x plane. Then it moves along that curve, and one finds that the system must periodically make jumps that are also very fast (on the fast time scale again). See plot... Well, I'm omitting many details. See starting from page 11 of the notes Lecture notes on nonlinear vibrations. Books: Nayfeh; Hayashi. Nonlinear regression: like linear regression, but the parameters enter nonlinearly in the function representation, for example as weights in a multi-layer perceptron (MLP), i.e. an ANN, usually with a few layers (shallow learning..). Vowpal Wabbit is good for logistic regression. See Topics in Nonlinear Dynamics by Balakrishnan, and other lectures by him. Nonlinear dynamical systems (often abbreviated to Nonlinear systems) are Dynamical systems where the O.D.E.s or the mapping functions that describe the dynamics are nonlinear. They offer much richer behavior, like bifurcations and chaos. Thus, while locally they can be linearized and analyzed by the same linear Jacobian techniques, globally they require a greater variety of analysis techniques, such as bifurcation theory, Lyapunov functions, trapping regions, attractors, and chaos theory. Make subsections of these and organize better. See Wiggins' book, and Strogatz. 
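The chaos mentioned above can be quantified by the Lyapunov exponent, which for a 1D map is the long-run average of ln|f'(x)| along an orbit (positive exponent: chaos). A sketch for the logistic map (parameter values are my own illustrative choices):

```python
import math

def lyapunov_logistic(r, n=100_000, x0=0.3, burn=1000):
    """Estimate the Lyapunov exponent of x -> r*x*(1-x) as the long-run
    average of ln|f'(x)| along the orbit."""
    x = x0
    for _ in range(burn):                  # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        d = abs(r * (1 - 2 * x))           # |f'(x)|
        total += math.log(max(d, 1e-300))  # guard against log(0)
        x = r * x * (1 - x)
    return total / n

print(lyapunov_logistic(4.0) > 0)   # True: positive exponent, chaos
print(lyapunov_logistic(2.5) < 0)   # True: orbit settles on a stable fixed point
```

At r = 4 the exact value is ln 2, which the estimate approaches for large n; at r = 2.5 the orbit converges to the fixed point with multiplier -0.5, giving ln(1/2).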
The theory of discrete systems has many analogies to the theory of continuous systems. Invariant manifolds in dynamical systems. Books: Strogatz, Nonlinear Dynamics and Chaos; Deterministic Nonlinear Systems: A Short Course,
Vadim S. Anishchenko, Tatyana E. Vadivasova, Galina I. Strelkova (auth.) See books on the Oxford course website. Other lecture notes: http://www.jpoffline.com/physics_docs/y3s5/nlp_lecture_notes.pdf More LNs: http://14.139.172.204/nptel/CSE/Web/108106024/Module5.pdf https://en.wikipedia.org/wiki/Nonparametric_statistics not based on parameterized families of probability distributions See Power laws For a discrete power law p(k) = C k^(-alpha), assuming k starts at 1, the normalization is C = 1/zeta(alpha), where zeta is the Riemann zeta function, or the generalized or incomplete zeta function if there is a minimum k_min over which we normalize. Or we could approximate the sum needed to normalize by an integral. Modern synthesis
1. Variation unbiased
2. Space of possible genotypes is very vast (even after discarding biologically unviable ones). Evolution is contingent. Cf. genetic drift. However,
*Contingency in genotype space does not imply contingency in phenotype space => convergent evolution Protein coding
Hoyle-Salisbury paradox. How can evolution find the right proteins in a hyperastronomically large space?
Maynard Smith argument
Keefe and Szostak computer experiments Levinthal's paradox Redundancy, correlation, and funnel-shaped landscapes RNA case study When talking about the word game, word probability is incorrectly used For self-assembling systems, the many-to-one map is from cluster configurations (like genotypes) to physically distinct systems (like phenotypes). However, self-assembly explores the phenotype space uniformly, and thus shows a bias in the genotype space, and it is a bias against simplicity and symmetry. Algorithmic information theory....
For fixed-length codes, simple codes have many ways of appearing. Fixed code lengths mean we have a finite state machine. Algorithmic complexity for finite state machines? Feed fixed-length input codes with a short prefix corresponding to the map (the condition of it being short, in particular much shorter than the inputs, could be the quantitative condition corresponding to Ard's observation that the map should be "simple"). Then feed this to a Turing Machine (TM). Results will be of varying length. You expect shorter lengths to be more common because:
Input to the TM is approximately like feeding random fixed-length codes (because the prefix code is much shorter than the input, by assumption, and the inputs are random).
If we reverse the TM... hmm, no, it doesn't work. Well, the output will be an input that produces a fixed-length string of bits for the reversed TM, but the distribution over outputs is not random now?
Are there more fixed-length strings that will produce shorter codes? Seems unlikely. But I'm missing the many-to-one nature of the mapping in this description. Or, hmm, the little prefix code should make this happen somehow? What kind of "prefix codes" can do this? Extracting Hidden Hierarchies in Complex Spatial Networks See Spatial networks Things can often be counted The LMFDB is an extensive database of mathematical objects arising in Number Theory. Average bias over 100 samples: 0.74, i.e. 74% of the output states have most of the inputs. I also have got the code working. Due to the way the libraries I'm using work, it has to be done in 5 steps: generating the fst files, converting them, running the fsts on random inputs, counting the number of inputs per output, and computing complexities of outputs.
I'm going to write a bash script that calls these in the right order. I'm also using the (modified) Lempel-Ziv complexity measure that you use, that Chico gave me.
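For reference, a minimal sketch of the plain Lempel-Ziv (1976) phrase count; the modified measure mentioned above may differ, so treat this as the textbook baseline only:

```python
def lz76_complexity(s):
    """Number of phrases in the Lempel-Ziv (1976) exhaustive parsing of s:
    each new phrase extends the longest substring that can be copied from the
    already-seen text (overlap allowed) by one extra symbol."""
    phrases, i, n = 0, 0, len(s)
    while i < n:
        j = i + 1
        # grow the phrase while it can still be copied from earlier text
        while j <= n and s[i:j] in s[:j - 1]:
            j += 1
        phrases += 1
        i = j
    return phrases

print(lz76_complexity("0001101001000101"))  # 6 phrases: 0|001|10|100|1000|101
print(lz76_complexity("01" * 20))           # 3: periodic strings are simple
```

High counts indicate incompressible (complex) strings; low counts indicate structure, which is the sense of "complexity" relevant to the simplicity-bias measurements above.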
At the moment, the random generation of fsts is done in Python. I think this is fine, as the bulk of the computation is in the "running" and complexity steps, which are C++. However, I found a C++ library that can randomly generate automata (http://regal.univ-mlv.fr/); I haven't yet managed to make it work, but if we do, it may be better to use that one. From preliminary runs, I have indeed found the C++ to be much faster, so that I could rather quickly run 10^6 input strings on 50 random 5-state transducers. Of those, 11-13 showed clear simplicity bias, the rest showing much smaller bias. This was actually using some Python code that is now C++, and should now work even better. Other statistics and complexity measures that we were talking about are yet to be implemented. Over and over again we see a pattern like this: nonlinear --(linearize & iterate)--> LINEAR, PDE --(discretize)--> ALGEBRA. Because of this, computers have brought linear algebra, and numerical linear algebra, to the forefront of the mathematical sciences. Standard algorithms to solve a linear system, i.e. matrix inversion, grow like O(n^3). To improve this one can: In recent years flop count is less and less important at the high end (i.e. for many processors) – communication is a bigger bottleneck. In standard form u' = f(t, u), which could represent a system of equations (i.e. u a vector). Discretize time in steps of size Δt (the timestep). Numerical methods (finite difference discretization methods): IVP codes in MATLAB; in Chebfun. See here for an explanation of local truncation error (LTE), used to find the order of accuracy (what we call accuracy above, e.g. O(Δt^2), i.e. error decreases with the square of the time step). Convergence and Stability Theory of convergence of multistep formulas by Dahlquist (1956). Analogs for RK too. Key definitions: consistent: order of accuracy at least 1. stable: if for f(t,u) = 0 all the solutions are bounded, i.e. does the error grow or stay bounded. See here too. 
convergent: for each fixed t, the numerical solution tends to the true solution as Δt → 0 (ignoring rounding errors from computing). Dahlquist equivalence theorem: consistent and stable is equivalent to convergent. The Adams formulas are consistent and stable, hence convergent. Adaptive ODE codes adapt the step size and other parameters so that estimated errors (using methods above, like LTE) are smaller than a prescribed value. Chaos and Lyapunov exponents. The Lorenz equations. Sinai billiards is another famous chaotic system. Stability regions: regions of the complex aΔt plane (a is a parameter in the model ODE u' = au; a = 0 corresponds to f = 0, as defined above for stability; I guess here we are being more general) in which solutions remain bounded. This is achieved when the characteristic polynomial of the recurrence relation, obtained by the finite difference method, has roots with |r| ≤ 1 and any root with |r| = 1 is simple. See here too. Stiffness. A stiff ODE is one with widely varying time scales. One may need a very small Δt because there are modes with aΔt (i.e. parts of the equation which create behaviour corresponding to a certain value of a) outside the stability region, even if our solution of interest has an effective aΔt inside it. This is manifested as our solution changing on a long timescale, but depending on short-timescale terms in the equation. Solution: backward-differentiation formulas, or implicit formulas, that include the unknown new value, unlike explicit formulas. These require solving a (generally) nonlinear equation (or a system of equations for PDEs). And this may itself need to be solved numerically, often, for example by Newton's method. Aside: we've been discussing IVPs here only. Boundary value problems (BVPs) are also important. Nonlinear BVPs may not have unique solutions (unlike IVPs)! Can use Chebfun to solve. Now we have time and space. The simplest approach is again finite difference discretization, now discretizing both time and space. Numerical stability: von Neumann analysis, or discrete Fourier analysis. 
Plug an imaginary (oscillatory) exponential into the finite difference formula, and see whether some mode blows up (the amplification factor being greater than 1 in modulus) or not. Define the region of stability thus. PDEs can also be stiff, for the same reasons as ODEs, and then one needs to use implicit methods too. A nonlinear example is the Kuramoto-Sivashinsky equation. Order of accuracy: defined now for both the timestep and the space step (see notes). To improve the order of accuracy over the straightforward Euler method (which is first order in Δt) we use the trapezoidal rule, which is symmetric in time (so that first-order errors cancel, and it is thus 2nd order in Δt). In the case of the heat equation it's known as the Crank-Nicolson formula. The analogous centered-in-time scheme for the wave equation is known as the leapfrog formula (1928). Reaction-diffusion equations and other stiff PDEs: can use exponential integrator methods... Solitons Finite differencing on general grids: not necessarily equally spaced. Principle: 1. At each grid point, decide which data, from neighbouring points, to use. 2. Interpolate these data by a polynomial. 3. The finite difference approximation to the m-th derivative is the m-th derivative of this interpolating polynomial at the grid point. We don't do these steps explicitly at every step; rather there are slick algorithms to get a formula for general derivative orders on arbitrary grids, and one uses that formula. See B. Fornberg, “Generation of finite difference formulas on arbitrarily spaced grids,” Math. Comput. 51 (1988), 699-706 and B. Fornberg, “Calculation of weights in finite difference formulas”, SIAM Review 40 (1998), 685-691. In multiple space dimensions the same principles apply, but the system of equations that needs to be solved for implicit methods corresponds to a matrix that has a much wider "band" (i.e. set of non-zero diagonals) than in 1 dimension. The structure of this matrix, in the case of discretizing the Laplacian, is the famous "discrete or lattice Laplacian" (related to the Graph Laplacian). See notes. This Laplacian can often be written as a Kronecker sum. 
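As an illustration of von Neumann analysis, consider the FTCS (forward-Euler) scheme for the heat equation u_t = u_xx: substituting the Fourier mode u_j^n = g^n e^{i θ j} gives the amplification factor g(θ) = 1 - 4σ sin²(θ/2) with σ = Δt/Δx², so the scheme is stable iff σ ≤ 1/2. A sketch (the sampling of θ is my own crude check, not a proof):

```python
import math

def ftcs_amplification(sigma, theta):
    """Von Neumann amplification factor of the FTCS scheme for u_t = u_xx:
    g = 1 - 4*sigma*sin^2(theta/2), with sigma = dt/dx**2."""
    return 1 - 4 * sigma * math.sin(theta / 2) ** 2

def ftcs_stable(sigma, samples=1000):
    """Stable iff |g(theta)| <= 1 for every Fourier mode theta in [0, pi]."""
    return all(abs(ftcs_amplification(sigma, math.pi * k / samples)) <= 1
               for k in range(samples + 1))

print(ftcs_stable(0.4))  # True: sigma <= 1/2
print(ftcs_stable(0.6))  # False: the sawtooth mode theta = pi blows up
```

The worst mode is always the sawtooth θ = π, where g = 1 - 4σ, which is exactly how the σ ≤ 1/2 restriction arises.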
Examples of Differential Equations, with nice explanations: Trefethen et al.'s PDE COFFEE TABLE BOOK Reaction-diffusion equations in Morphogenesis Books: Griffiths & Higham, 2010 - introduction to numerical ODE Iserles, 2009 - includes connection to PDEs LeVeque, 2007 - likewise Hairer, Norsett & Wanner I & II - authoritative; full of fun and historical remarks Ascher & Petzold 1998 - also includes DAEs (differential-algebraic equations,
which combine ODEs and nonlinear eqs) Deuflhard & Bornemann, 2002 Trefethen, old online textbook (http://people.maths.ox.ac.uk/trefethen/pdetext.html)
a.k.a. OOP. An object is a collection of data and functions (methods) that often act on this data. Keywords: Encapsulation. Message-passing metaphor. Data abstraction. Modularity. Abstract data types (often implemented as classes). A class is a collection of objects with characteristics in common. A class is represented as a template from which one can instantiate objects. Instantiation is often done by "calling" the class, as if it were a function. In many respects, classes and objects are similar. An object built from a class is an instance, and it has attributes: methods and fields (variables). These are often accessed using dot notation. Special methods in Python do operator overloading. In Python: Inheritance. A class can inherit attributes from another class, when it is defined. Shadowing (a.k.a. overriding an inherited method). OOP is good for modelling systems where you have lots of elements that possibly interact. Factory functions in JavaScript: factories are functions that implement the same functionality as classes, but have some advantages. The only disadvantage is that they are probably a bit slower. I think this is related to prototype-oriented programming, JavaScript's version of OOP. https://www.wikiwand.com/en/Octet_rule The octet rule is a chemical rule of thumb that reflects the observation that atoms of main-group elements tend to combine in such a way that each atom has eight electrons in its valence shell, giving it the same electronic configuration as a noble gas. The rule is especially applicable to carbon, nitrogen, oxygen, and the halogens, but also to metals such as sodium or magnesium. What things exist? See Metaphysics. Modern operating systems that are designed for multitasking make use of Concurrent computing ideas, such as Multithreading and Interprocess communication (IPC). An operation is a Function between a Cartesian product of Sets and another set. Often, the domain is a Cartesian power of a single set. 
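The OOP notions listed above (class, instantiation, fields, methods, operator overloading, inheritance, overriding) can be sketched in a few lines of Python; the Shape/Square example is my own illustration:

```python
class Shape:
    """Encapsulation: data (fields) and behaviour (methods) live together."""
    def __init__(self, name):
        self.name = name                 # field (instance variable)

    def area(self):
        raise NotImplementedError

    def __add__(self, other):            # special method: operator overloading
        return self.area() + other.area()

class Square(Shape):                     # inheritance
    def __init__(self, side):
        super().__init__("square")
        self.side = side

    def area(self):                      # overriding (shadowing) the method
        return self.side ** 2

s = Square(3)                            # instantiation by "calling" the class
print(s.name, s.area())                  # square 9
print(Square(2) + Square(3))             # 13, via the overloaded + operator
```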
Operations research is a discipline that deals with the application of advanced analytical methods to help make better decisions. Transportation, Assignment, and Transshipment Problems (basically problems in linear programming, a.k.a. linear optimization). From Winston's book on OpRes: http://www.producao.ufrgs.br/arquivos/disciplinas/382_winston_cap_7_transportation.pdf Troika, squaring the circle, Kohn Gallery.webm 不可能モーション2 〜 Impossible Motions 2 〜 I wonder if there could be some formal analogies between how these illusions apparently distort space-time and how General relativity works. https://en.wikipedia.org/wiki/Mathematical_optimization https://en.wikipedia.org/wiki/Optimization_%28disambiguation%29 (Offline algorithm: you process all the data at each step.) Newton-type methods: Taylor expand to second order (in a multivariate way) and minimize that, i.e. take the derivative (gradient) and set it to 0. It performs upper-bound minimization. Newton CG (conjugate gradient) algorithms. The expensive thing is computing the Hessian. Approximate methods like BFGS, L-BFGS. Line search. (Online algorithm: you process the data sequentially, in chunks. You need this if you do not have access to all of it at the same time, or you have so much data that not all of it fits in your RAM.) You only use a mini-batch (a small sample) of the input data at a time, in practice. There are theorems that show that this converges well. Downpour – Asynchronous SGD Polyak averaging: a running average over the parameter values at all time steps performed up to now. Momentum: you add inertia to the particle, so that gradient descent is not just velocity = gradient (as it would be in a viscous fluid), but acceleration = -(viscosity)(velocity) + gradient. Adagrad: put more weight on rare features [Duchi et al]. Very useful. Rare features (i.e. the value along one dimension, for example) tend to carry more information, i.e., they are able to tell you more about what the output should be. This seems maybe related to AIT. 
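The mini-batch, momentum and Polyak-averaging ideas above can be combined in a few lines. A sketch on a toy scalar least-squares problem (the problem, hyperparameters, and function names are my own illustrative choices):

```python
import random

def sgd_momentum(grad, theta0, data, lr=0.1, beta=0.9, epochs=50, batch=10):
    """Minibatch SGD with momentum and a Polyak (running) average of the
    iterates, on a single scalar parameter for simplicity."""
    theta, v = theta0, 0.0
    avg, steps = 0.0, 0
    for _ in range(epochs):
        random.shuffle(data)
        for i in range(0, len(data), batch):
            g = grad(theta, data[i:i + batch])   # stochastic gradient estimate
            v = beta * v - lr * g                # inertia plus the gradient "force"
            theta += v
            steps += 1
            avg += (theta - avg) / steps         # Polyak running average
    return theta, avg

# Toy problem: minimise the mean squared distance to noisy samples around 3.0
random.seed(0)
data = [3.0 + random.gauss(0, 0.5) for _ in range(200)]
grad = lambda th, b: sum(th - x for x in b) / len(b)
theta, avg = sgd_momentum(grad, 0.0, data)
print(abs(avg - 3.0) < 0.3)   # True: the average lands near the sample mean
```

The Polyak average is typically less noisy than the final iterate, which is the point of keeping it.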
Simplex algorithm Artificial and machine intelligence? Gradient-based Hyperparameter Optimization through Reversible Learning See links here See notes for defs, Asymptotic approximation Big O: f = O(g) means f grows no faster than a constant times g (f could be asymptotic to const*g, or much smaller). Small o: f = o(g) means f is strictly much less than g. Strict order: f is strictly of order g, i.e. asymptotic to some constant times g. Also: Big theta notation, and Big omega notation. Ordinal analysis The study of Permutation complexity, which we call ordinal analysis, can be
envisioned as a new kind of symbolic dynamics whose basic blocks are ordinal patterns. Trying to explain the prevalence of genotype-phenotype map bias. Ideas: See Conversation with Chico Camargo on GP map bias - 22/4/2016 See also Abiogenesis and Artificial chemistry Shannon-Fano-Elias code and simplicity bias in GP maps Discussion on finite state complexity and GP map bias Yuri Manin's ideas Physics in the world of ideas: Complexity as energy talk Zipf's law and L. Levin's probability distributions Complexity vs Energy: Theory of Computation and Theoretical Physics More Statistical Physics and Complexity St Anne's college library has the "wild book" (see Krohn–Rhodes theory)! Complexity of cellular automata, applied to life and evolution Osmiophoresis of a spherical shell which is permeable to solvent but impermeable to product particles refers to its development of a nonzero velocity due to osmotic forces that cause radial flows of solvent across the membrane. Movement of a semipermeable vesicle through an osmotic gradient Osmosis is the spontaneous net movement of solvent molecules through a semi-permeable membrane into a region of higher solute concentration, in the direction that tends to equalize the solute concentrations on the two sides. https://en.wikipedia.org/wiki/Osmosis It is often described by a "solvent potential", which is lowered by the addition of solute, and raised by increases in hydrostatic pressure. Thus, the solvent tends to flow from regions of lower to higher solute concentration, and this tendency can be countered by a sufficiently large pressure difference. However, the physical mechanisms that cause this are tricky. See the description of mechanisms here: Physical mechanisms of osmosis See also Osmotic forces for more general related effects, caused by interactions of the solute with the boundary Osmotic pressure is defined as the external pressure required to be applied so that there is no net movement of solvent across the membrane. 
Osmotic pressure is a colligative property, meaning that it depends on the molar concentration of the solute but not on its identity. See Fluid mechanics, Thermodynamics. See Microhydrodynamics for other possible osmotic effects, which can also cause pressure gradients. See also Biophysics In Reverse osmosis, the process is reversed by applying a pressure greater than the osmotic pressure. This has applications to desalination, for instance. The theory of the reverse osmosis separation of solutions using fine-porous membranes http://physics.stackexchange.com/questions/212183/physic-explanation-to-osmosis?rq=1 Capillary osmosis through porous partitions and properties of boundary layers of solutions Molecular Understanding of Osmosis in Semipermeable Membranes Forward osmosis: Principles, applications, and recent developments A particular kind of Interfacial force Oxford Artificial Intelligence Society In the making... See AI meetup too Website Code for animated background: http://codepen.io/MarcoGuglielmelli/pen/lLCxy I am currently living in Oxford, and thus a big part of my activities are related to it. Github repo Using Jekyll Domain name registration In Wordpress: https://wordpress.com/domains/manage/oxford3dprintingsociety.com/name-servers/3dprintingoxford.wordpress.com See Measures and metrics for networks There is one potentially undesirable feature of Katz centrality: an important vertex pointing to many vertices makes all those vertices important. The centrality gained by virtue of receiving an edge from a prestigious vertex should be diluted by being shared with so many others (think of a web directory like Google or Yahoo! pointing to my page; my page is not that central, because it's just one of millions). We can solve this by dividing the centrality derived from each neighbour by that neighbour's out-degree: x_i = alpha * sum_j A_ij x_j / k_j^out + beta, or, in matrix form, x = alpha A D^(-1) x + beta 1, where D is the diagonal matrix with D_jj = max(k_j^out, 1), so that the indeterminate 0/0 terms from vertices with zero out-degree are defined to be 0. This can also be rearranged to get x = beta (I - alpha A D^(-1))^(-1) 1. 
The result is known as PageRank, the trade name given by Google, which uses this measure in their ranking algorithm. Just like with Katz centrality, alpha has to be fixed, and it must be less than the reciprocal of the maximum eigenvalue of A D^(-1); if it is equal, the centralities blow up, and if it is above, the answer turns out to be meaningless. That maximum eigenvalue (at least for an undirected network) is 1 (as can be shown using the Perron-Frobenius theorem; see the footnote on page 177 of Newman's book, and Meyer, Matrix Analysis and Applied Linear Algebra. The theorem is very useful in stochastic processes on networks in general). Google uses alpha = 0.85. One can see that this measure is mathematically the same as the steady state of a random walk on the network, with an added probability, related to the ratio of alpha and beta, of "teleporting" to another part of the network, so that one doesn't get stuck in nodes without out-degree in the case of directed networks, and doesn't just recover simple degree centrality for undirected networks. Paleontology is the scientific study of life existent prior to, and sometimes including, the start of the Holocene Epoch, roughly 11,700 years before present. Dynamics and growth of bacterial colonies Some patterns are similar to Diffusion-limited aggregation The Mechanics and Statistics of Active Matter Statistical mechanics and hydrodynamics of bacterial suspensions Geometry and Topology of Turbulence in Active Nematics HYDRODYNAMIC PHENOMENA IN SUSPENSIONS OF SWIMMING MICROORGANISMS Excitable Patterns in Active Nematics Defect dynamics in active nematics ACTIVE MATTER - Brandeis University [Video] 2D Active Nematic Under Fluorescence Microscopy Live Soap: Stability, Order, and Fluctuations in Apolar Active Smectics A partial ordering on a Set S is a (binary) Relation on S that is reflexive, antisymmetric, and transitive. A set with a partial ordering is called a Partially ordered set (or poset). 
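The PageRank random-walk picture can be sketched as a power iteration; a minimal version (the tiny example network and the uniform-spreading convention for dangling nodes are my own illustrative choices):

```python
def pagerank(adj, alpha=0.85, iters=100):
    """Power-iteration PageRank on an adjacency list {node: [successors]}.
    Dangling nodes (zero out-degree) spread their weight uniformly, one
    common convention for handling the indeterminate 0/0 terms."""
    nodes = list(adj)
    n = len(nodes)
    x = {v: 1 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - alpha) / n for v in nodes}   # "teleport" contribution
        for v in nodes:
            if adj[v]:
                share = alpha * x[v] / len(adj[v])  # divide by out-degree
                for w in adj[v]:
                    new[w] += share
            else:
                for w in nodes:                     # dangling node
                    new[w] += alpha * x[v] / n
        x = new
    return x

# Tiny directed network: b, c, d all link to a; a links back to b
ranks = pagerank({'a': ['b'], 'b': ['a'], 'c': ['a'], 'd': ['a']})
print(max(ranks, key=ranks.get))   # 'a': the node everyone points to
```

The scores sum to 1, so they can be read as the stationary distribution of the teleporting random walk.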
A Pre-order is a weaker kind of relation. For many common examples, the Partial ordering is often interpreted as "less than or equal". A partially ordered set (or poset) is a Set with a Partial ordering. Pastes: materials that can be deformed easily (like liquids), but keep their shape after the force is applied (like solids). Two common qualitative characteristics of their structure may be distinguished: disorder, as in most of these materials no specific arrangement can be distinguished, which explains their ability to be deformed at will without losing their mechanical properties; and crowding, as the elements making up these materials interact significantly with their neighbours, which explains the solid behaviour of these systems as long as the applied forces are not too large, and from that point of view we are dealing with jammed systems. Pastes typically consist of a suspension of small particles in a background fluid. These particles are crowded, or jammed together like grains of sand on a beach, forming a disordered, glassy or amorphous structure, and giving pastes their solid-like character. Rheology of Soft Glassy Materials Condensed matter: Memories of paste The authors make a remarkable observation: although the sample was completely fluidized by the large shear stress, it developed a 'memory' of the direction in which the stress was applied, and the solid-like paste slowly 'pulled back' on itself in the opposite direction, eventually passing beyond its initial position. A path (sometimes called a 'walk') in a network is a sequence of nodes such that every consecutive pair of nodes in the sequence is connected by an edge in the network. For directed networks an edge must be traversed in the direction of the edge; in undirected networks, in either direction. Self-avoiding paths (a.k.a. 'simple paths') don't traverse the same node or edge twice. The length of a path is the number of times one traverses an edge. [A^2]_ij is only non-zero if there's a path of length 2 from i to j. 
The total number of such length-2 paths is sum_ij [A^2]_ij. Similarly, the total number of 3-paths is sum_ij [A^3]_ij. In general, [A^r]_ij counts paths of length r from i to j. Cycles are paths that start and end at the same vertex. The number of cycles of length r is Tr A^r = sum_i (kappa_i)^r, where kappa_i is the i-th eigenvalue of A. This can be proved by the Jordan decomposition of the matrix if it is diagonalizable (so the nilpotent part is zero, i.e. there are no 1s above the diagonal). Otherwise one can prove it using the Schur decomposition. A simple cycle is a self-avoiding cycle. Shortest paths between two points define the geodesic distance. They are always self-avoiding, because any loop could be removed to make the path shorter. By convention we sometimes assign a distance of infinity to unconnected nodes. They are not necessarily unique. The diameter of a graph is the longest geodesic distance between any pair of connected nodes. An Eulerian path is one that traverses each edge in a network exactly once. It is not self-avoiding in general, because a node with degree higher than two will need to be visited more than once. A necessary condition for a graph to have an Eulerian path is that there are zero or two nodes with odd degree, the first case corresponding to beginning and ending the path on the same node, and the second case to beginning and ending on different nodes. A Hamiltonian path is one that visits each node exactly once. It is self-avoiding, because traversing an edge more than once would imply traversing a node more than once. The general problem of finding Eulerian or Hamiltonian paths in a graph, or proving their non-existence, is hard and still actively researched. This was used by Euler to solve the famous Königsberg bridge problem in 1736. These paths have applications in computer science: job-sequencing, "garbage collection", and parallel programming. The Markov property of most stochastic processes means that one can naturally construct a Path integral description. This can be used to draw parallels between stochastic processes and quantum mechanics. 
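The counting of walks and cycles by powers of the adjacency matrix can be checked on a tiny graph; the triangle-plus-pendant example is my own illustration:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(A, r):
    n = len(A)
    P = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    for _ in range(r):
        P = matmul(P, A)
    return P

# Triangle 0-1-2 with a pendant node 3 attached to node 2 (undirected)
A = [[0, 1, 1, 0],
     [1, 0, 1, 0],
     [1, 1, 0, 1],
     [0, 0, 1, 0]]

A3 = matpow(A, 3)
print(A3[0][1])                         # 3 walks of length 3 from node 0 to 1
print(sum(A3[i][i] for i in range(4)))  # trace of A^3 = 6: the one triangle,
                                        # counted per starting node and direction
```

The trace value 6 matches the eigenvalue formula Tr A^3 = sum_i kappa_i^3, and shows why each triangle is counted six times (3 starting nodes, 2 directions).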
From Langevin equation to path integrals: we begin with the general Langevin equation with no inertial term, but with a deterministic force. We also assume the noise term is Gaussian white noise. Itô/Stratonovich dilemma and Multiplicative noise See also State-dependent diffusion: Thermodynamic consistency and its path integral formulation https://en.wikipedia.org/wiki/Pathology The study of the abnormal (and often restricted to detrimental) function of a biological organism. In medicine, a physiologic state is one arising from normal body function; pathology, by contrast, is centered on the abnormalities that occur in animal diseases, including those of humans. See section 7.12.2 of Newman's book. It is the number of common neighbours minus the expected number of common neighbours if edges were random (k_i k_j / n), normalized in a certain way. It is also the covariance between two rows of the adjacency matrix divided by the product of their standard deviations: Generally, percolation refers to qualitative changes in connectivity in systems (especially large ones) as components are added or removed. In particular, percolation most often refers to the case where a system goes from being "mostly disconnected" to "mostly connected", in some sense. A more general mathematical model inspired by percolation and the Potts model is the Random-cluster model. Percolation theory, from the perspective of Network theory, describes the behavior of connected clusters in a network (often modelled as a random graph), as some substructures in the network are added or removed. The most common types are random site and bond percolation, where one removes either nodes or edges with a uniform probability, known as the occupation probability. However, there are other types (see below). Again, from the perspective of networks, the transition from the system being "disconnected" to "connected" is most often made precise by the appearance of a giant connected component. See below. 
Often, the theory of percolation is concerned with the clustering properties of identical objects which are randomly and uniformly distributed through space with a given occupation probability. However, these uniformity assumptions may be relaxed in other types of percolation. Keywords: Network science, Complex systems. Newman's book, and Mason and Gleeson tutorial have good reviews. See more at References for percolation Mathematical theory of percolation, with several important results, and discoveries. A phase transition occurs between a phase without a giant connected component and a phase with one. A giant connected component, or GCC, is a connected component that contains a finite fraction of the nodes as the network size N → ∞, i.e. it has an "extensive" scaling. The transition occurs at a critical value of the occupation probability, known as the percolation threshold. Main types: Applications in topography (study of landscapes) have been found, in particular relating to: The Bethe lattice is defined as an infinite graph in which each node is connected to z neighbors (the coordination number) such that no closed loops exist in the geometry. They are related to Cayley trees. Several results exist for Percolation on these lattices. For instance, their Percolation threshold is p_c = 1/(z − 1), for any coordination number z. See also the chapter on this in the phase transitions book by Sole. Percolation on hypercubic lattices, which can be represented as Z^d, where d is the dimension of the lattice, and Z is the set of integers of course. Some mathematical results exist for Percolation thresholds, and Continuity of percolation phase transition. In particular, it is known that the percolation threshold at dimension d is greater than or equal to that at dimension d + 1, for percolation on Z^d.
See Percolation theory, Random graph If we let u be the probability that a randomly chosen vertex in the graph does not belong to the giant component, then for a random graph with mean degree c it satisfies the self-consistency condition u = e^{c(u − 1)}, and the giant-component fraction is S = 1 − u. See this chapter for random graphs with general degree distributions, and this chapter for percolation. Percolation is the simplest fundamental model in statistical mechanics that exhibits phase transitions signaled by the emergence of a giant connected component (or GCC; it is a connected component that contains a finite fraction of the nodes as the network size N → ∞, i.e. it has an "extensive" scaling, in the language of Statistical physics). The parameter that controls the existence of a GCC is the occupation probability (or the "attach probability"); the critical value at which the transition happens is called the percolation threshold. In particular, the transition is often a continuous transition (2nd order) with a critical point. Behaviour at this point is thus an example of Critical phenomena, and at this point the system is self-similar (see Fractals), and as a consequence, many quantities follow Power laws. See section 12.2, and exercise 12.12, as well as exercise 2.13 of this book. Percolation threshold For random site percolation on a configuration model graph: p_c = 1/g1'(1), where g1 is the generating function of the excess degree distribution. See Newman's book. Note that even if there is a GCC, its size may be small, so a full understanding of the network's resilience should include the dependence of the size of the GCC on the occupation probability. The mathematical theory of Percolation. Cluster: a connected component of the occupied subgraph (the graph obtained after removing edges in the percolation process). Probability that there exists an infinite cluster. Probability that there exists a giant cluster (or giant component, or giant connected component (GCC)), defined as a cluster whose size (number of nodes, or order) is a finite fraction of n as n → ∞ (n is the size of the whole network).
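The emergence of the GCC at mean degree c = 1 shows up in a small simulation; a sketch using union-find, where drawing n*c/2 edges uniformly at random is an assumed stand-in for G(n, p) with p = c/(n − 1):

```python
import random

def largest_cluster_fraction(n, mean_degree, seed=0):
    """Largest connected component, as a fraction of n, after drawing
    roughly n*mean_degree/2 random edges (approximates G(n, p))."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(i):
        # Union-find root lookup with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for _ in range(int(n * mean_degree / 2)):
        a, b = find(rng.randrange(n)), find(rng.randrange(n))
        if a != b:
            parent[a] = b
    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n
```

Below the threshold (mean degree 0.5) the largest cluster is a vanishing fraction of the network; above it (mean degree 2) the self-consistency condition S = 1 − e^{−2S} predicts S ≈ 0.80.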
A related, but different quantity is the probability that a node belongs to a giant cluster. Often it's easier to work with the complementary probability that a node is not connected to the GCC. Another property of interest is the Distribution of sizes for the small clusters in percolation models. A related quantity is the mean cluster size. The two-point correlation function is defined as the probability that if one point is in a finite cluster then another point a distance r away is in the same cluster. This function typically has an exponential decay ~ e^{−r/ξ}; ξ is then the correlation length, or connectedness length. Note that the correlation length can also be defined in some other ways that measure the characteristic size of clusters; in particular one can use the radius of gyration to define it. See here. A model which is particularly tractable analytically. There are some exact results for some models, in 2D for the square, triangular, honeycomb and related lattices, but not for many others, like site percolation on the square and honeycomb lattices, and bond percolation on the kagomé lattice. The continuum limit, at the critical point, is often a Conformal field theory, as percolation models at the critical point are found to have conformal symmetry. A relatively new method to describe the continuum limit of the critical lattice models is Schramm–Loewner evolution. There are some results on the number of possible infinite clusters which can coexist. The values of percolation thresholds are not universal and generally depend on the structure of the lattice and dimensionality, and are believed to achieve their mean-field values only in the limit of infinite dimension (Some Cluster Size and Percolation Problems).
Finding rigorous proofs of exact thresholds and bounds has also been an enduring area of research for mathematicians (The critical probability of bond percolation on the square lattice equals 1/2, A bond percolation critical probability determination based on the star-triangle transformation, Percolation - Grimmett). Exact thresholds (for bond percolation) in 2D for the square, triangular, honeycomb and related lattices were found using the star-triangle transformation (Some Exact Critical Percolation Probabilities for Bond and Site Problems in Two Dimensions). It has been shown in Exact bond percolation thresholds in two dimensions that thresholds can be found for any lattice that can be represented as a self-dual 3-hypergraph (that is, decomposed into triangles that form a self-dual arrangement). It is also shown in [G.R. Grimmett, I. Manolescu, Probab. Theory Related Fields] that thresholds can be found for any lattice that can be represented geometrically as an isoradial graph, yielding a broad new class of exact thresholds and providing a proof (The critical manifolds of inhomogeneous bond percolation on bow-tie and checkerboard lattices) of Wu's 1979 conjecture (Critical point of planar Potts models) for the threshold of the checkerboard lattice. However, the exact values of thresholds for many systems of long interest (such as site percolation on the square and honeycomb lattices, and bond percolation on the kagomé lattice) are still missing (Recent advances and open challenges in percolation). There exist also bounds on the percolation thresholds for infinite connected graphs with maximum finite vertex degree. See Grimmett's book. The percolation threshold for bond percolation is less than or equal to that of site percolation. See Permutation complexity in dynamical systems Permutation entropy was introduced in 2002 by C. Bandt and B. Pompe as a
measure of complexity in time series. In a nutshell, permutation entropy replaces
the probabilities of length-L symbol blocks in the definition of the Shannon entropy by the probabilities of length-L Ordinal patterns. See SimpleMind mindmap and notes and problem sets in LectureNotes See also lectures on YB from Bender (at PI) Perturbation methods exploit the existence of a small or large parameter to derive systematically a precise approximation. More art than science; building experience is valuable. There are two methods for obtaining precise approximations: numerical methods and analytical (asymptotic) methods. These are not in competition but complement each other. Perturbation methods work when some parameter is large or small. Numerical methods work best when all parameters are order one. Agreement between the two methods is reassuring when doing research. Perturbation methods often give more physical insight. See reading list there (as discussed in Bender's book Part 2. Is this the same as regular perturbation methods, as discussed in Hinch's book? I think so). Mostly for problems with regions of very different speed of change. These are singular perturbation problems, often arising when the small parameter multiplies the highest derivative. Then the limit problem is of lower order, and will in general not be able to satisfy all the boundary conditions of the original problem. I wonder if there are analogues of these methods for algebraic equations. Maybe through the Perturbation methods for difference equations, which are closer to algebraic equations. These are described in Bender's book. Books: Hinch; Bender and Orszag. Faster if the expansion sequence is unknown (i.e. we don't know if it's a power series or a log series, for instance); slower if the expansion sequence is known. Pose (guess) an expansion, for instance a power series in the small parameter ε: x = x0 + ε x1 + ε² x2 + …; substitute into the algebraic equation, and equate terms of equal order, because asymptotic expansions (using a fixed set of functions of ε) are unique.
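The Bandt-Pompe permutation entropy described above is short to implement; a minimal sketch (natural logarithm assumed; ties broken by position in the window):

```python
import math
from collections import Counter

def permutation_entropy(series, L):
    """Bandt-Pompe permutation entropy of order L (natural log).
    Counts ordinal patterns of length-L windows instead of symbol blocks."""
    patterns = Counter()
    for i in range(len(series) - L + 1):
        window = series[i:i + L]
        # Ordinal pattern: the permutation that sorts the window.
        pattern = tuple(sorted(range(L), key=lambda k: window[k]))
        patterns[pattern] += 1
    total = sum(patterns.values())
    return -sum((c / total) * math.log(c / total) for c in patterns.values())
```

A monotone series has a single ordinal pattern and hence zero entropy; a fully random series approaches the maximum value log(L!).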
Easier than the iterative method, especially when working to higher orders, but one must assume the form of the expansion. Used when the limit problem (ε = 0) differs in an important way from the limit ε → 0. Main method: Regularization method: scale variables so that the problem becomes regular. When the power expansion fails (one of the coefficients seems to need to be infinite), an expansion in non-integral powers may be necessary. This happens for example when the roots of the limit problem (ε = 0) form a double root. As he says of the example given in the notes, we could have guessed that an order-√ε change in x would be required to produce an order-ε change in a function at its minimum. Indeed, if we are perturbing the parabola by an order ε, then the new root would be the same as perturbing x in such a way as to get the order-ε change in the original parabola. At the minimum of the parabola, from Taylor expanding, we see we need a larger change in x to get that change in the function. We first pose the general expansion, substitute into the algebraic equation, and look for dominant balances in the result. This will involve looking for the largest terms with and without ε. Once we have the first term, we add a further term to the expansion, and we repeat this process. Again, the iterative method is very useful when the expansion sequence is not known, and can be faster than the above method involving unknown expansion functions. It normally appears in transcendental equations. Use the iterative method when the expansion form is hard to guess. In his example, "over this range one term is slowly varying while the other is rapidly varying. This suggests rewriting the equation" so as to solve for the faster-changing term; I think this is so that we control/determine the faster-changing term. Qualitative picture ... See sec 2.2.2 of Soft Condensed Matter by Richard Jones, and also the beginning chapter of Principles of condensed matter physics. As we increase temperature, the average energy per particle increases (see Equilibrium statistical physics).
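A minimal worked example of the expansion method for an algebraic equation (the equation x² + εx − 1 = 0 is my illustrative choice, not from the notes): substituting x = x0 + ε x1 + ε² x2 and matching orders gives x0 = 1, x1 = −1/2, x2 = 1/8, which we can compare to the exact root.

```python
import math

def exact_root(eps):
    """Positive root of x**2 + eps*x - 1 = 0 (quadratic formula)."""
    return (-eps + math.sqrt(eps**2 + 4)) / 2

def perturbative_root(eps):
    """Regular perturbation series x = x0 + eps*x1 + eps**2*x2.
    Matching orders of eps gives x0 = 1, x1 = -1/2, x2 = 1/8."""
    return 1 - eps / 2 + eps**2 / 8
```

For ε = 0.1 the series agrees with the exact root to about ε⁴/128 ≈ 10⁻⁶, consistent with the next nonzero term in the expansion.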
Because the potential between molecules is generally bounded above (for example the attractive part can decay like −1/r^6 or −1/r at large r, so that the maximum potential energy is 0), as we increase T, we soon reach a point where we must increase the kinetic energy, as the potential energy becomes saturated (i.e. the molecules have dissociated, or we have broken the bond). Therefore, as we increase temperature, we find that we go to phases where the kinetic energy is more and more dominant, often from solid to liquid to gas, though, for low enough pressure, the liquid phase is skipped. Phase diagrams: 2D projections of a surface in the 3D space of temperature, pressure, and volume. Critical point: point at which the gas-liquid transition changes from being continuous to discontinuous. Triple point: point of coexistence between three phases. Order parameter: quantity that distinguishes different phases, often associated with some kind of "order", and often zero in the disordered phase. There are two main types: These equations describe the evolution of phase fields: the fields of the space-time-varying order parameter. They thus belong to the so-called phase-field method used in Materials science, for example. Mixture theory, or the theory of interacting continua, also uses the above equations for describing multi-phase systems. See ON THE DEVELOPMENT AND GENERALIZATIONS OF ALLEN-CAHN AND STEFAN EQUATIONS WITHIN A THERMODYNAMIC FRAMEWORK See also Soft matter physics notes. Though, I would like to see a more rigorous derivation of these equations, based on non-equilibrium thermodynamics. The derivations are rigorous; they just use Constitutive equations that are mostly just assumed, instead of derived! Describing phase transitions in terms of a free energy, which is a function of the order parameter, and depends on parameters (such as temperature). As one varies the parameters, the free-energy minima change location, and appear/disappear at phase transitions.
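The free-energy picture can be made concrete with the simplest Landau form; f(m) = a·t·m² + b·m⁴ is the standard quartic example (my illustrative choice), whose minimum moves continuously from m = 0 to m ~ sqrt(−t) as the reduced temperature t = T − Tc changes sign:

```python
import math

def landau_minimum(t, a=1.0, b=1.0):
    """Global minimum of the Landau free energy f(m) = a*t*m**2 + b*m**4.
    Setting df/dm = 2*a*t*m + 4*b*m**3 = 0 gives m = 0 for t >= 0,
    and m = sqrt(-a*t/(2*b)) for t < 0 (the symmetry-broken phase)."""
    if t >= 0:
        return 0.0
    return math.sqrt(-a * t / (2 * b))
```

The order parameter grows as |t|^(1/2) below the transition, i.e. the mean-field critical exponent β = 1/2.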
Ginzburg-Landau theory in Statistical field theory: write down the most general free energy that is consistent with the known symmetries of the order parameter. Assume it can be written as a power series and stop when additional terms don't change the behaviour of interest. Symmetries. Symmetry breaking. Correlation functions, etc. Critical exponents (describe behavior of thermodynamic functions near the critical point). Order of transition "One does not simply define Philosophy" ~ Me Philosophy is this. It is the study of the Cosmos, from the anthropocentric perspective, of our Knowledge of the Cosmos. This is, in a fundamental sense, all we have, since the physical, objective perspective of the Cosmos ultimately derives from our Knowledge of it. From this perspective, the Cosmos becomes equated with our Knowledge of it. Philosophy concerns itself with the study of the Cosmos from this perspective, which, basically by definition, encompasses everything else here, everything one ever thinks, and is conscious of. I call this the observer perspective, and I think it's the most fundamental. This is in contrast to the god perspective often taken in Science, where we imagine an objective reality separate from our minds (described in Cosmography and Cosmology). This perspective has been so useful and fruitful that I consider this physical objective world to be true also, even if our only access to it is by our limited senses and mental models of it (as asserted by the observer perspective). Most often I work with the god perspective, as in Science. However, when dealing with complex philosophical questions, I have to switch to the more fundamental observer perspective. See also Metaphysics for another description of the above, as a view of the nature of Existence. Portal:Contents/Philosophy and thinking Stanford encyclopedia of philosophy My Metaphysics: Observer/god perspective, or a better name may be Mind/Physical reality.
My Epistemology: Principle of Inclusiveness My Ethics: Utilitarianism up to the point you can. Then virtue/Emotion/Aesthetics. In particular, see discussion in Emotion. My Logic. Don't know enough, but I think Mathematical logic may be the best description. My Politics (ideas only): A weak dynamic social democracy, combined with a robust weighted direct democracy and a cyber-government. One of the central ideas of this philosophy that combines the observer perspective with the existence of a physical world is the division of everything into whether information flows from observer to physical world, or vice versa. The former, I call Art; the latter, I call Science. All other sections are effectively described by these two aspects of the observer condition, which I holistically call the Conversation with Nature. Here is a very interesting alternative to my conceptual framework: Krebs cycle of creativity Inclusiveness Principle to arrive at Truth. See The emergent multiverse, and papers by David Wallace http://physics.stackexchange.com/questions/27190/experimental-test-of-the-non-statisticality-theorem Duhem-Quine thesis: you can't test a hypothesis in isolation, but always in conjunction with ancillary assumptions. Karl Popper Beyond Descartes and Newton: Recovering life and humanity Proposed classification A phoretic mechanism of colloids is any mechanism/effect that causes colloidal particles to move in a way that is partially deterministic (unlike Diffusion), due to the gradient of some physical quantity (this seems to be the working definition, judging from what I've read). These may also be called transport mechanisms. These are important in Active matter, in Biophysics, and Nanotechnology. In particular, phoretic effects can make a colloidal particle self-propelling. A large class of mechanisms for colloid transport are due to interfacial forces, arising from non-trivial Microhydrodynamics, Chemical reactions, or other effects. See Colloid Transport by Interfacial Forces.
See also the more recent Manipulation of Colloids by Osmotic Forces Generic theory of colloidal transport Thermal non-equilibrium transport in colloids Phoretic mechanisms for active colloids. Phoretic mechanisms for living organisms (for instance living active colloids like Cells) are called Taxis. See Chemotaxis for a prominent example. Actually, chemotaxis is often applied to the phoretic mechanisms of active colloids (when they originate from a gradient in a chemical concentration). More specifically, chemotaxis may be used to refer to attraction to higher chemical concentration, while anti-chemotaxis would refer to repulsion from it. Photosynthesis: Crash Course Biology #8 Light dependent reactions Basically Cellular respiration in reverse. Water + carbon dioxide + sunlight → glucose + oxygen. Light independent reactions Calvin cycle See Networks miniproject (in overleaf), pdf here: Spatial network optimization with a model of the physarum polycephalum Analytical properties of physarum solver http://epubs.siam.org/doi/pdf/10.1137/1.9781611974331.ch131 http://arxiv.org/pdf/1101.5249v1.pdf Physarum Can Compute Shortest Paths PHYSARUM CAN COMPUTE SHORTEST PATHS: A SHORT PROOF http://arxiv.org/abs/1106.0423 http://arxiv.org/abs/1601.02712 On a Natural Dynamics for Linear Programming Tero's model A mathematical model for adaptive transport network in path finding by true slime mold A Mathematical Study of Physarum polycephalum http://www.ncbi.nlm.nih.gov/pubmed/18415133 Other models An Improved Physarum polycephalum Algorithm for the Shortest Path Problem Physarum Learner: A bio-inspired way of learning structure from data An adaptive amoeba algorithm for constrained shortest paths Dynamics Experiments Plasmodial vein networks of the slime mold Physarum polycephalum form regular graphs Are motorways rational from slime mould's point of view?
Physical mechanisms of Osmosis Based on Chemical potentials, Solution (Chemistry) The solution-diffusion model: a review MECHANISM OF OSMOTIC FLOW IN POROUS MEMBRANES The standard chemical potential explanation still holds as part of the mechanism. See here. The energy comes from the expansion of the solute (which works like an ideal gas), just like in quasistatic adiabatic expansion. However, the boundary layer given by a Diffusio-osmotic effect enhances the chemical potential difference at the pore, increasing the osmotic pressure. The extra work done in the process, I think, ultimately comes from the fact that the potential energy near the wall is lowered as the solute concentration decreases during the process. When the membrane is semi-permeable (as in Osmosis proper), then I think that the main effect would be an excluded volume effect (this appears to be indeed the case, at least for purely semi-permeable membranes; see Negative osmosis), giving rise to an effective repulsive potential, like that in Nelson's Biological physics book, or those that appear in Diffusio-osmosis, or Diffusiophoresis. Molecular mechanisms of osmosis
Mechanism of osmosis OSMOSIS: A MACROSCOPIC PHENOMENON, A MICROSCOPIC VIEW Osmosis is not driven by water dilution, here too. See Nelson's Biological physics book for more details The mechanism is based on the wall repelling the solute molecules. See analysis here. Alternative mechanisms: Osmosis, colligative properties, entropy, free energy and the chemical potential
Osmosis and thermodynamics explained by solute blocking
http://www.circle4.com/biophysics/chapters/BioPhysCh05.pdf Brownian motion, hydrodynamics, and the osmotic pressure Molecular Understanding of Osmosis in Semipermeable Membranes See also Negative osmosis for more resources. Physics (from Ancient Greek: φυσική (ἐπιστήμη) phusikḗ (epistḗmē) "knowledge of nature", from φύσις phúsis "nature") is the natural science that involves the study of matter and its motion through space and time, along with related concepts such as energy and force. One of the most fundamental scientific disciplines, the main goal of physics is to understand how the universe behaves. (wiki). International Centre for Theoretical Sciences hyperphysics, etc. Some physics books: http://www.fisica.net/ebooks/ See DB\Cosmos, etc.... Should organize this. See SimpleMind mindmap. See also Bulk matter. Studies the function of organisms. Goes together with Anatomy, which studies the structure of organisms. One often restricts physiology to refer to the "normal" functioning of organisms, in contrast with Pathology https://en.wikipedia.org/wiki/Physiology See Human physiology see vids here Places Portal:Contents/Geography and places Vivekananda Rock & Valluvar Statue, southernmost peak of India, where two seas and an ocean meet
A planar network (or graph) is one that can be drawn on a plane without having any edges cross. For these graphs we can define the Dual Graph, with vertices being faces (regions completely enclosed by edges), and edges between faces that share an edge of the original graph. This new graph is also planar. Dual graphs were used to prove the four-color theorem by Appel and Haken, which translated to graphs is stated in terms of the chromatic number: the number of colors required to color the vertices of a graph in such a way that no two vertices connected by an edge have the same color. Kuratowski's theorem... As yet, there is no popular measure of degree of planarity (i.e., how planar a graph is). A planetary system is a set of gravitationally bound non-stellar objects in orbit around a star or star system Evolved more than 500 million years ago, as Lycophytes. These plants were so numerous that they have resulted in many coal beds from this period, now called Carboniferous Plant Cells: Crash Course Biology #6 See Plant They have a cell wall made of the polysaccharides cellulose, hemicellulose and pectin; sometimes also lignin or cutin Plastic is a generic term used in the case of polymeric material that may contain other substances
to improve performance and/or reduce costs. Note 1: The use of this term instead of polymer is a source of confusion and thus is
not recommended. Note 2: This term is used in polymer engineering for materials often compounded that
can be processed by flow. ... Some potentially good ideas for political systems Social Futurism A weak dynamic social democracy, combined with a robust weighted direct democracy and a cyber-government. From each according to his wants, to each according to his wants, from the machines whatever else is needed. https://www.youtube.com/watch?v=F5uqZGA06vE Transhumanist declaration 1998 http://ieet.org/index.php/IEET/more/twyman20140416 http://wavism.net/ Bioneering Sociocyberneering http://www.thevenusproject.com/ http://www.thezeitgeistmovement.com/ Polymatharchy http://en.wikipedia.org/wiki/Idea_of_Progress A more in-depth summary: 1. weak (not many powers given to the "admin bods" as Russell Brand calls them (https://www.youtube.com/watch?v=3YR4CseY9pk&t=4m56s). Also see: https://www.youtube.com/watch?v=gy0R56sZ0ts to understand this better.) 2. dynamic democracy (citizens can vote to change leaders at any time, Details: http://www.ted.com/.../a_dynamic_democracy_where_lea.html) 3. social (ok, this is a broad term. I will give it here the definition of focusing on social causes, that is, on the betterment of all people's lives. Include ideas like the declaration of human rights, basic income and income taxes here.) 4. strong direct democracy (refers to most decisions being taken by all the citizens by some voting/discussion scheme, probably striving for >50% majority. A note here is that fewer restrictions would generally be put on corrective decisions, rather than initiative ones, because the imperative to avoid harm is greater than that to enhance some quality. This idea is called corrective democracy: http://www.fee.org/the.../detail/can-we-correct-democracy...) 5. weighted (means that people who have certain qualifications or have acquired certain merits have a bigger say in issues) 6.
cyber-government (this refers to both the technologies used to implement a lot of the above, and to the general idea of creating an advanced nervous system for society (see https://www.youtube.com/watch?v=5zn8MRKOskw&t=78m18s for example), from which everyone can get informed and inform others, and which can itself help in arriving at decisions (by different possible kinds of AI)) – experimental politics would also be a thing, as more people start viewing politics as the "social tech" it is. Freedom will thus be enhanced by voluntarism in things like startup cities: http://startupcities.org/hacking-law-and-governance-with.../ In short, my view is that technology allows governance to really be put in the hands of the citizens, but this must be done in an intelligent and supervised way.
A polymer is a molecule composed of a small molecular unit repeating in a chain, usually many units long. The chain may have complicated topology, like branches, or cross-links. Links can also be made between different polymers (of different chemical composition for instance). These all determine the polymer architecture. Polymer chemistry Example of polymer: https://en.wikipedia.org/wiki/Polystyrene Main architectures: More specific examples of architectures: Interestingly, when one closes a linear-chain polymer into a loop, the viscosity drops dramatically. Polymer physics deals with the physical properties of Polymers. A polymer is a molecule composed of a small molecular unit repeating in a chain, usually many units long. Model a polymer chain like a random walk. Can include the effect of short-range interactions. A variant is the Gaussian chain Flory-Huggins theory Chemical potential and osmotic pressure Phase separation Books and resources http://cbp.tnw.utwente.nl/PolymeerDictaat/ Introduction to Polymer Physics - M. Doi The Theory of Polymer Dynamics - M. Doi & S.F. Edwards People S.F. Edwards P.G. de Gennes Doi Viscoelastic fluids Reptation cross-linking rubbers (See Arrival of the frequent for context) If the mutation supply is large enough, the population naturally spreads over different genotypes, a regime called the polymorphic limit. See Polymorphic limit (Wright-Fisher model) tiddler for more. To model neutral exploration, we let the fitness be a Kronecker delta on a single phenotype, call it p0, so that only p0 has nonzero fitness and all other phenotypes have zero fitness, and so, even if a mutation produces them, no offspring can inherit from them. At every generation, all offspring inherit from p0 only, and thus the population can only spread by mutations over a single generation jump, and it is most likely to stay mostly within p0, if the population is large enough. We should note that equations like Eq. 3 would be the same even though we assumed that all the individuals are in p0, because all the selection weight is in p0, which produces the same results.
More precisely, in the expression only the elements corresponding to individuals in p0 are nonzero in the sum, and so in the mean-field approximation (where we assume their number is constant) the factor from the sum cancels the one from the denominator. In the mean-field approximation the expected number of individuals with the novel phenotype produced per generation is now independent of time, and given by Eq. 3, under the corresponding assumptions, because even if not all of the population are in p0, the fitness assumption we've made gives selective weight only to those in p0 (see Wright-Fisher model). As we said above, the number of individuals with the novel genotype (p-type) will follow a binomial distribution, with some probability of success (getting a p-type offspring) and N trials, and therefore the probability to get at least one such individual is one minus the probability of getting none. After t generations, we have run the Bernoulli trial N t times, and thus the number of p-type individuals we have gotten, summed over all the generations, also follows a binomial distribution, but with N t samples, and the same probability. Thus, the time at which the probability of having discovered a p-type individual (produced a p-type offspring) reaches a given value is found from Eq. 4, where we used Eq. 3 in Arrival of the frequent. Mathematical population genetics See Evolution See Wright-Fisher model, Arrival of the frequent, Monomorphic limit (Wright-Fisher model), Polymorphic limit (Wright-Fisher model). Second Bangalore School on Population Genetics and Evolution School and Discussion Meeting on Population Genetics and Evolution (video lectures) Some terms: gene, genotype, allele, (gene) locus, haploid, diploid, homozygote, heterozygote, heterozygosity, monoecious, dioecious, polymorphism, linkage, recombination.
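The at-least-one-discovery calculation above is easy to check numerically; a sketch (the success probability p per trial, the population size N, and the threshold probability 1/2 used below are illustrative placeholders for the quantities in Eqs. 3-4):

```python
import math

def discovery_probability(p, N, t):
    """Probability of producing at least one p-type offspring after t
    generations of N Bernoulli trials each with success probability p:
    1 - (1 - p)**(N*t)."""
    return 1 - (1 - p) ** (N * t)

def discovery_time(p, N, alpha=0.5):
    """Continuous t at which the discovery probability reaches alpha,
    obtained by inverting 1 - (1 - p)**(N*t) = alpha."""
    return math.log(1 - alpha) / (N * math.log(1 - p))
```

Inverting the cumulative probability this way is exactly the step that leads from Eq. 3 to Eq. 4.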
https://en.wikipedia.org/wiki/Haplodiploidy Fixation time Coalescent Computational biology - An evolutionary approach https://en.wikipedia.org/wiki/Neutral_theory_of_molecular_evolution Some mathematical models from population genetics course Mathematical Population Genetics lecture notes Theoretical evolutionary genetics - Felsenstein (book), pdf Probability Models for DNA Sequence Evolution Population Genetics V: Neutral Theory Wright-Fisher model with some stuff on the coalescent Random Genetic Drift & Gene Fixation Some mathematical models from population genetics book Genetic Drift and Effective Population Size Heterozygosity and the Wright-Fisher model (stackexchange) Quantitative genomics (MIT) ppt STOCHASTIC MODELS FOR GENETIC EVOLUTION Diffusion Process Models in Mathematical Genetics Short course on statistical population genetics ON THE PROBABILITY OF FIXATION OF MUTANT GENES IN A POPULATION THE AVERAGE NUMBER OF GENERATIONS UNTIL FIXATION OF A MUTANT GENE IN A FINITE POPULATION Notes on population genetics and evolution: "Cheat sheet" for review Intuitive explanation of fixation time https://en.wikipedia.org/wiki/Porous_medium A porous medium or a porous material is a material containing pores (voids). The skeletal portion of the material is often called the "matrix" or "frame" Material porosity and permeability A porous material most often refers to porous solids, i.e. porous materials where the matrix is a solid. If the porosity of a porous solid is high enough, it also falls under the category of Foams, and many of these are very flexible materials. Solid-gas Dispersed media do form materials with pores, but they are different from porous solids in that the location of these pores can change as the material is strained or disturbed in some way.
See also Scale-free networks A power law distribution for x has the form p(x) ∝ x^(−α), where α is the exponent. Lorenz curves for power law distributions Zipf, Power-laws, and Pareto - a ranking tutorial http://www.necsi.edu/guide/concepts/powerlaw.html Similarity of Symbol Frequency Distributions with Heavy Tails Top-heavy distributions Power laws often mean that rare events are more likely than one might have thought, because the tail "dies off" more slowly than in distributions with exponentially decaying tails, like Gaussians. Power Law Distributions, 1/f Noise, Long-Memory Time Series The Power Spectral Density (PSD) and the Autocorrelation. A periodogram is a sample estimator for the PSD. See here too. Power spectral density
power law PSD more on power laws and 1/f noise Fourier Transform--Exponential Function Wiener–Khinchin theorem
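The Wiener–Khinchin relation listed above (the PSD is the Fourier transform of the autocorrelation) can be checked numerically; for a finite series, the periodogram equals the DFT of the circular autocorrelation term by term. A stdlib-only sketch with a naive DFT:

```python
import cmath
import random

def dft(x):
    """Naive discrete Fourier transform (fine for small n)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def periodogram(x):
    """Sample estimator of the PSD: |X_k|^2 / n."""
    return [abs(X) ** 2 / len(x) for X in dft(x)]

def circular_autocorrelation(x):
    n = len(x)
    return [sum(x[t] * x[(t + lag) % n] for t in range(n)) / n
            for lag in range(n)]

rng = random.Random(1)
x = [rng.gauss(0, 1) for _ in range(64)]
# Wiener-Khinchin (finite, circular version): the periodogram equals the
# DFT of the circular autocorrelation, up to floating-point error.
psd = periodogram(x)
wk = [X.real for X in dft(circular_autocorrelation(x))]
max_err = max(abs(a - b) for a, b in zip(psd, wk))
```

The identity follows from |X_k|²/n = (1/n) X_k conj(X_k) and a change of summation variable, so max_err should be at floating-point level.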
http://personal.egr.uri.edu/chelidz/courses/mce567/handouts/psdtheory.pdf https://en.wikipedia.org/wiki/Power-line_communication Main current applications in narrow-band networking: A pre-order on a Set S is a (binary) Relation on S that is reflexive and transitive. aka prefix-free, or instantaneous code A string x is a prefix of another string y if the first n symbols of y coincide with x, for some n. A prefix code is a Variable-length code where no codeword is a prefix of another codeword. (IC 2.6) Prefix codes - remarks and what's next Any prefix code is uniquely decodable. A prefix code can be represented as a search tree, and this is a nice way to think about prefix codes. The above definition may be called left-prefix. There is also the notion of right-prefix. See here Example to see why prefix codes are faster (in the sense of computational complexity) to decode than other uniquely decodable codes. Prefix codes are decodable in linear time Measure-theoretical dynamical system where the measure is a Probability measure I invented this term, not sure if it already exists. https://en.wikipedia.org/wiki/Probability_space Probability Theory Wiki article. Mathematical foundations of probability Probability, Mathematical Statistics, Stochastic Processes Based on Measure theory Basic results in probability theory Cumulative distribution function Probabilistic method (see book) A change in the properties of something through time. See also Activity The product topology on a Cartesian product of Topological spaces (X_i, i ∈ I, where I is some index set) is defined to be the union of all sets of the form ∏_i U_i, where each U_i is open in X_i. We are assuming here that I is finite. This definition is not correct when I is infinite, and the definition using cylinder sets below must be used. Note that the definitions are different because the basis is constructed from finite intersections of the open cylinders. 
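The prefix-code notions above are easy to check programmatically: a prefix-freeness test, plus the Kraft sum, which is at most 1 for any prefix code (the example codes below are arbitrary illustrations):

```python
def is_prefix_free(codewords):
    """A code is a prefix code if no codeword is a prefix of another."""
    for a in codewords:
        for b in codewords:
            if a != b and b.startswith(a):
                return False
    return True

def kraft_sum(codewords, alphabet_size=2):
    """Kraft inequality: any prefix code satisfies sum q^(-len(w)) <= 1."""
    return sum(alphabet_size ** -len(w) for w in codewords)

code = ["0", "10", "110", "111"]   # a complete binary prefix code (sum = 1)
bad = ["0", "01", "11"]            # not prefix-free: "0" is a prefix of "01"
```

For the complete code above the Kraft sum is 1/2 + 1/4 + 1/8 + 1/8 = 1, which is why it can be drawn as a full binary search tree with codewords at the leaves.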
However, some elements corresponding to infinite Cartesian products of the form ∏_{i∈I} U_i can't be realized from finite intersections of open cylinders, which all have the form ∏_{i∈J} U_i × ∏_{i∉J} X_i, where J is a finite subset of I. This comes about, for example, in infinite Sequence spaces. It can also be constructed using Filter subbases and Filter bases (that generate the open sets of the topology) Note the elements forming the subbase are part of the final topology. They have the form described above if we remember that the full set is always open. The sets forming the subbase are known as open cylinders, while those forming the basis are known as Cylinder sets. Another equivalent way of defining the product topology is as the 'smallest' topology such that the projection functions p_i : ∏_j X_j → X_i are Continuous functions. A smaller subbase is given by the Cylinder sets Universal Coating for Programmable Matter DNA Computing and Molecular Programming: 21st International Conference, DNA ... Distributed Intelligent MEMS: Progresses and Perspectives Scalable Simulation of Wireless Electro-Magnetic Nanonetworks Design, Fabrication and Characterization of an Autonomous, Sub-millimeter Scale Modular Robot A Markov Chain Algorithm for Compression in Self-Organizing Particle Systems More specifically, computer programming Abstractions Lecture 1 - Programming Paradigms (Stanford) Implementation Search syntax constructs of common languages: https://syntaxdb.com/ Principles of Programming Languages Structure and Interpretation of Computer Programs Books: Clean code, the art of computer programming Imperative: give instructions to change the state of the program Declarative: just write statements (assertions) of what things do, what functions they perform. Then the program can take inputs and give outputs by passing inputs through the various nested functions (Functional programming). Visual programming languages Nice example: https://vvvv.org/ Most programming languages are context-free. 
http://stackoverflow.com/questions/898489/what-programming-languages-are-context-free. See Theory of computation Assembly (programming language) Other languages Go, Lisp, Clojure, Projects, ideas, action, is about new ideas, the brink of the known, the edge of the philosophical Cosmos. Interdisciplinary, antidisciplinary, etc. New emergent ideas. Things that don't fit Also: thinking what to do, and doing. Lives of important/influential people: http://fundersandfounders.com/ Facebook, twitter news feed... One of the most studied model organisms is Escherichia coli Gram staining is a method of staining used to differentiate bacterial species into two large groups (gram-positive and gram-negative), by detecting peptidoglycan, which is present in a thick layer in gram-positive bacteria Actinobacteria is a phylum of Gram-positive bacteria with high guanine and cytosine content in their DNA Streptomyces is the largest genus of Actinobacteria Streptomyces hygroscopicus produces Sirolimus, also known as rapamycin, which is an inhibitor of the Kinase enzyme Mechanistic target of rapamycin Educational portal of the awesome Protein databank : http://pdb101.rcsb.org/ https://en.wikipedia.org/wiki/Public_health "the science and art of preventing disease, prolonging life and promoting health through organized efforts and informed choices of society, organizations, public and private, communities and individuals." Libraries Graphics:
http://www.pythonware.com/products/pil/
https://pypi.python.org/pypi/colour/ Maths:
numpy
sympy
https://www.continuum.io/why-anaconda Others:
http://blog.rtwilson.com/my-top-5-new-python-modules-of-2015/ Oxford, Fabian Essler C6 physics notes Advances in Graphene, Majorana fermions, Quantum computation New questions in quantum field theory from condensed matter theory Ideal Fermi gas Weakly interacting Bose gas From Hamiltonian can derive Gross-Pitaevskii equation http://www.nii.ac.jp/qis/first-quantum/forStudents/lecture/pdf/qis385/QIS385_chap4.pdf Bogoliubov approximation Alternative using density matrix. Spin waves in ferromagnets Strictly speaking, a quantum liquid is a spatially homogeneous system of strongly interacting particles at temperatures sufficiently low that the effects of quantum statistics are important. In practice the term is used more broadly, to include those aspects of the behavior of conduction electrons in metals and degenerate semiconductors which are not sensitive to the periodic nature of the ionic potential. See also Quantum fluid and Quantum spin liquid –> See Robert Littlejohn's notes. Other techniques Quantization of Constrained Systems Enhanced Quantization: A Primer https://en.wikipedia.org/wiki/Ramsauer%E2%80%93Townsend_effect Based on the density matrix. Naturally extends the classical formalism of Statistical physics A Programming language for Statistics Good IDE: RStudio R programs on the web!: Shiny See lynda.com lectures Random stochastic automata? REGAL: a library to randomly and exhaustively generate automata http://regal.univ-mlv.fr/
Enumeration and random generation of possibly incomplete deterministic automata
http://www.swmath.org/software/791 Enumeration and Generation of Initially Connected Deterministic Finite Automata (implemented in FAdo). See Dynamical Instability in Boolean Networks as a percolation Problem, Boolean network Random Boolean networks: Analogy with percolation Lattice sites can be divided into two groups: sites susceptible to damage, and sites stable against damage. If the initially flipped centre spin belongs to an infinite connected network of sites susceptible to damage, then the initially small damage will spread over the whole system. A scaling theory for the Kauffman model, analogous to that for percolation, is presented in the Appendix. From simulations it is observed that moving sites, i.e. those not having local period one, cluster together into groups of connected neighbours. These clusters are ramified, similar to those of percolation theory. Indeed, for p below p_c, one only has clusters of finite periods, whereas for p above p_c, we find, besides these finite clusters of finite periods, one infinite cluster of infinite period. In another set of simulations, the ratio of final to initial damage is interpreted by Derrida and Stauffer (1986) as a susceptibility, similar to the ratio of magnetization to magnetic field in ferromagnets. Indeed, simulations indicate that this quantity diverges if p approaches p_c from below. The long-time limit of the damage for infinitesimal initial damages follows a typical second-order phase transition curve. Random automata, Deterministic finite automaton Enumeration and Generation of Initially Connected Deterministic Finite Automata implemented in the Python FAdo library. Initially connected means that, for each state q there exists a directed path from the distinguished start state to q. I think another name for an automaton, or a state of one, with this property is accessible. 
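A quick experiment matching the accessibility remarks above: sample uniform random transition structures and measure the fraction of states reachable from the start state. The sizes below are arbitrary; heuristically (branching-process argument), for k = 2 letters the reachable fraction concentrates around the root of v = 1 − e^(−kv) ≈ 0.80, and a uniform structure is almost never fully accessible:

```python
import random
from collections import deque

def random_transition_structure(n, k, rng):
    """Uniform random complete DFA transition table: delta[state][letter]."""
    return [[rng.randrange(n) for _ in range(k)] for _ in range(n)]

def accessible_states(delta, start=0):
    """BFS from the start state; returns the set of reachable states."""
    seen = {start}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for nxt in delta[state]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

rng = random.Random(0)
n, k, trials = 200, 2, 200
fractions = [len(accessible_states(random_transition_structure(n, k, rng))) / n
             for _ in range(trials)]
avg_fraction = sum(fractions) / trials               # concentrates near ~0.8
fully_accessible = sum(f == 1.0 for f in fractions)  # almost always 0
```

This is the simulation counterpart of the counting argument: accessible structures are a vanishing fraction of all transition structures.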
Using Analytic combinatorics A Functional graph (see article), corresponding to a total map from a set to itself, consists of components, each a cycle of trees (a forest whose roots are connected by a cycle). Note that the nodes in the trees have edges pointing toward the root. This combinatoric structure emerges from the constraint that the out-degree is exactly 1 for all nodes in the functional graph. As an example of applying the symbolic method and singularity analysis of analytic combinatorics, they find the asymptotic value of the average number of cyclic points (points (nodes) belonging to a cycle), which is √(πn/2), n being the number of points. See definitions of transition structure, automaton, accessible automaton, etc. in the article. One can also show that the expected number of points with in-degree 0 (garden-of-eden points) is, asymptotically, a constant fraction of the points. One can also show that with high probability a transition structure is not accessible. We look at the set of n-node transition structures whose nodes have in-degree at least 1, except possibly the initial state. This set has asymptotically the same cardinality as the set of accessible transition structures, up to a multiplicative constant. It's easy to show that there is a bijection between this set and a set of surjections. The number of these is, asymptotically, given in terms of computable constants. Note that this number is much smaller than the total number of transition structures. This agrees with the previous argument that accessible structures are sparse. Also note that the corresponding ratio is the probability that a random map between the two sets is a surjection. Good showed this (see ref in article). See more remarks in article. A more relevant question may be the number of isomorphic classes of accessible automata; however, symmetries (just like in Feynman diagrams) make the counting difficult. 
However, for accessible automata, the counting is simplified, due to a certain bijection, and the number of elements per isomorphic class is known. More References on random deterministic automata On the Probability of Being Synchronizable An algorithm for road coloring Graph structure of random automata Diameter and Stationary Distribution of Random r-out Digraphs The graph structure of a deterministic automaton chosen at random – slides What about the giant out-component? They don't talk about it!? Graphs with probabilistic properties Erdős–Rényi model The most common random graph model is the Erdős–Rényi model. Random connections among a given set of nodes. Configuration model http://tuvalu.santafe.edu/~aaronc/courses/5352/fall2013/csci5352_2013_L11.pdf Random graph with given degree distribution See this chapter .... See Newman's book on Networks Random Graphs, Geometry and Asymptotic Structure https://www.youtube.com/watch?v=pylTEAyUQiM Sample calculations Average number of edges between two nodes in the limit of large size. This is approximately equal to the probability of an edge between the two nodes in the limit of large size too. Excess degree distribution Generating functions for the small components See derivation in problem sheet or notes or book, using generating functions (in particular its "power" property, where the g.f. of a sum of independent random variables is the product of the g.f.s of these random variables). Giant component Can find expression for size of giant component. One can then derive a condition for the existence of a giant component in the configuration model. It is called the Molloy-Reed condition: ⟨k²⟩ − 2⟨k⟩ > 0. GCC = giant connected component. Degree-triangle model A variant that has a tunable clustering coefficient. aka random map model, or random mapping For each point in phase space, one chooses at random another point in phase space as being its successor in time, i.e. we have a random map f from a finite Set of points to itself. 
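The Molloy–Reed condition ⟨k²⟩ − 2⟨k⟩ > 0 is easy to evaluate for a given degree distribution; for a Poisson distribution with mean c it reduces to c² − c > 0, i.e. the classic Erdős–Rényi threshold c > 1. A small sketch:

```python
import math

def molloy_reed(degree_probs):
    """Molloy-Reed criterion for the configuration model: a giant component
    exists (in the infinite-size limit) iff <k^2> - 2<k> > 0."""
    k1 = sum(k * p for k, p in degree_probs.items())
    k2 = sum(k * k * p for k, p in degree_probs.items())
    return k2 - 2 * k1

def poisson_probs(c, kmax=60):
    """Truncated Poisson degree distribution (Erdos-Renyi in the large-size limit)."""
    return {k: math.exp(-c) * c**k / math.factorial(k) for k in range(kmax + 1)}

# For Poisson: <k> = c and <k^2> = c^2 + c, so the criterion is c^2 - c > 0.
sub = molloy_reed(poisson_probs(0.5))   # c = 0.5: negative, no giant component
sup = molloy_reed(poisson_probs(2.0))   # c = 2.0: positive, giant component
```

For c = 0.5 the criterion evaluates to −0.25, and for c = 2.0 to +2.0, matching the c > 1 threshold.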
It can be shown to be a limiting case of a Kauffman Random Boolean network, with in-degree . To each attractor (labelled by ), we assign a weight corresponding to the fraction of points in its basin of attraction. See also Analytic combinatorics Joint probability distribution of two attractor weights is for large Probability that the map is indecomposable and the attractor is of period : See Probability Distributions Related to Random Mappings , A Property of Randomness of an Arithmetical Functions The average number of attractors is Probability that a randomly chosen point falls into an attractor of weight and period Probability that a randomly chosen point ends up on an attractor of weight : For large , this gives where . This gives the average , and variance In Probability Distributions Related to Random Mappings , some of the above results are extended to the case without self-1-loops, , and where the function is one-to-one The random map model: a disordered model with deterministic dynamics Probability Distributions Related to Random Mappings A Property of Randomness of an Arithmetical Functions The Expected Number of Components Under a Random Mapping Function Probability of Indecomposability of a Random Mapping Function Probability Distributions Related to Random Transformations of a Finite Set Weighted Random Mappings; Properties and Applications. Some remarks about computer studies of dynamical systems Random-Energy Model: Limit of a Family of Disordered Models
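A small simulation of the random map model described above: a classical random-mapping statistic (also derivable by the analytic-combinatorics route mentioned earlier) is that the expected number of cyclic points of a uniform random map on n points is asymptotically √(πn/2). The sizes below are arbitrary:

```python
import math
import random

def random_map(n, rng):
    """Uniform random map f: {0, ..., n-1} -> {0, ..., n-1}."""
    return [rng.randrange(n) for _ in range(n)]

def cyclic_points(f):
    """After n applications of f, the image of the whole space is exactly
    the set of points lying on a cycle (every tail has length < n)."""
    image = set(range(len(f)))
    for _ in range(len(f)):
        image = {f[x] for x in image}
    return image

rng = random.Random(0)
n, trials = 256, 400
avg_cyclic = sum(len(cyclic_points(random_map(n, rng)))
                 for _ in range(trials)) / trials
theory = math.sqrt(math.pi * n / 2)   # asymptotic mean number of cyclic points
```

For n = 256 the theory value is about 20, and the simulated average should sit within a couple of units of it (the distribution of cyclic-point counts is wide, so averaging over many maps is needed).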
– Random-energy model: An exactly solvable model of disordered systems See Random matrix theory, Disordered system, Simplicity bias in finite state transducers, Finite state channel Products of Random Matrices in Statistical physics Random matrix products and measures on projective spaces pdf Capacity of Finite State Channels Based on Lyapunov Exponents of Random Matrices Mellin transform of RMPs Topics in Products of Random Matrices Properties of the columns in the infinite products of nonnegative matrices Supersymmetric approach to random band matrices - Tatyana Shcherbyna NCTS Scholar Lectures: Mini Course on Random Matrices (I) How Large is the Norm of a Random Matrix? Discrete Random Matrices -- 2009 Moursund Lectures, Day 3 Top Eigenvalue of a Random Matrix: A tale of tails - Satya Majumdar Random_matrices,_random_processes https://terrytao.wordpress.com/category/teaching/254a-random-matrices/ Properties of networks with partially structured and partially random connectivity A random walk is a path across a network created by taking repeated random steps. They are usually allowed to traverse edges more than once, and visit vertices more than once. If not, it is a self-avoiding random walk. We consider a random walk where at each vertex the walker takes a step (i.e. it does not stay at the vertex) along one of the edges connected to it, chosen with uniform probability, i.e. with probability 1/k_i, where k_i is the degree. Thus, on an undirected network we have p_j(t+1) = Σ_i (A_ij / k_i) p_i(t), where p_i(t) is the probability that the walker is at vertex i at (discrete) time t, and A is the adjacency matrix. One can also write this relation in terms of the reduced adjacency matrix, and that can be useful sometimes. We are interested in the limit t → ∞, where we expect the probability to approach a steady state p: p_j = Σ_i (A_ij / k_i) p_i, which can be rewritten as L D⁻¹ p = 0 (with D the diagonal matrix of degrees), so D⁻¹p is an eigenvector of the Graph laplacian L = D − A with eigenvalue 0, but we know (see Graph laplacian) that in a connected network only the vector (1, 1, ..., 1) has eigenvalue 0. 
Therefore D⁻¹p ∝ (1, 1, ..., 1), i.e. p_i ∝ k_i, so normalizing, p_i = k_i / 2m (see Degree of a vertex (Graph theory)). With a random walk, an interesting question is that of the mean first passage time, or the mean number of steps before reaching a certain node, when starting from a given node. To find this we consider an absorbing random walk, where a walk that arrives at a certain set of vertices (we will consider just one, call it u) will stay there. We can then consider the probability p_u(t) of being at vertex u at time t. This is the same as the probability that the first passage time is equal to or less than t, and thus the probability that it is exactly t is p_u(t) − p_u(t−1), and the mean first passage time is τ = Σ_t t [p_u(t) − p_u(t−1)]. Note that we can't rearrange terms in this sum, because it is not absolutely convergent! Following the manipulations shown in Newman's book (section 6.14), we get to an expression in which the prime indicates that the u-th element, or the u-th row and column, have been removed. In particular, the resulting matrix is called the u-th reduced Laplacian. This can be re-expressed a bit further, following Newman's book, for computational convenience. Resistor networks Kirchhoff's current law can be written as a current-balance condition at each node, where I_i is the external current applied at node i of the network. This can be written in terms of the Graph laplacian as LV = I, where V is the vector of voltages. L is not invertible, but this corresponds to the arbitrariness in the value of the voltages, which can all be shifted up and down and still satisfy the equation. This is equivalent to adding a multiple of the vector (1, 1, ..., 1), which we know to have eigenvalue 0 for the Graph laplacian. However, if we fix the voltage at some node (to be 0, say), then we can remove the corresponding column and row from the equation, and the zero eigenvalue is removed; the reduced Laplacian is now invertible, so we can get the voltages, and thus the currents! Random walk sampling method for social networks Random walk betweenness measure. 
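The steady state derived above, p_i = k_i / 2m, can be checked by straightforward power iteration on a small connected, non-bipartite graph (the graph below is an arbitrary example — a triangle with a pendant vertex; bipartite graphs would oscillate rather than converge):

```python
def stationary_distribution(adj, steps=2000):
    """Iterate p_{t+1}(j) = sum_i A_ij p_t(i) / k_i on an undirected graph.
    The steady state should be pi_i = k_i / (2m), proportional to degree."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    p = [1.0 / n] * n
    for _ in range(steps):
        q = [0.0] * n
        for i in range(n):
            for j in range(n):
                if adj[i][j]:
                    q[j] += p[i] / deg[i]
        p = q
    return p

# Triangle (0-1-2) with a pendant vertex 3 attached to vertex 2.
adj = [[0, 1, 1, 0],
       [1, 0, 1, 0],
       [1, 1, 0, 1],
       [0, 0, 1, 0]]
pi = stationary_distribution(adj)
deg = [sum(row) for row in adj]
expected = [k / sum(deg) for k in deg]   # k_i / 2m = [0.25, 0.25, 0.375, 0.125]
```

The iterated distribution should match k_i / 2m to numerical precision, confirming the Laplacian eigenvector argument above.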
A family of probabilistic models invented by Fortuin and Kasteleyn which include Percolation, and the Ising and Potts models as special cases. The configuration space of the random-cluster model is the set of all subsets of the edge-set E, which we represent as the set {0,1}^E. The model may be viewed as a parametric family of probability measures on {0,1}^E. When q = 1, we recover bond Percolation; when q = 2, we have the Ising model; and when q = 3, 4, ..., we have different versions of the Potts model. It turns out that long-range order in a Potts model corresponds to the existence of infinite clusters in the corresponding random-cluster model. In this sense the Potts and percolation phase transitions are counterparts of one another. Reference: Grimmett - The Random-Cluster Model Averaged version of a Master equation. Used, for instance, in Chemical kinetics and in Epidemiology. Proof that the square root of two is irrational Imagine the Pythagorean squares associated with the sides of a right-angled triangle with equal leg sizes. By the Pythagorean theorem, the square corresponding to the hypotenuse has the same area as the sum of the squares of the legs. Now if the ratio of hypotenuse to leg were a rational number, then one could choose a size of unit square such that an integer number of them fitted along the sides of the triangle, and the squares on the sides could be partitioned into these unit squares. This means that the number of unit squares in the big square equals the sum of the numbers of unit squares in the squares of the legs, but as these are equal, this is just twice the number of unit squares from one leg. Therefore the number of unit squares in the big square must be even. If the number of unit squares is even, cutting the square in half perpendicular to a side should give an integer number of unit squares in each half. If the number of unit squares along a side weren't even, cutting in half would cut unit squares in half, and there would be as many of these half unit squares in each half as there are unit squares along a side of the square. 
As the number in each half must be an integer, this would be a contradiction; therefore, the number of unit squares along a side is even. On the other hand, if we began by choosing the minimal ratio between the sides, then the number of unit squares in a leg is not even, for if it were, we could just halve the number of squares in both. Now consider cutting the initial right-angled triangle in half parallel to the hypotenuse. As the number of unit squares in the hypotenuse was shown to be even, the number of unit squares in the half-hypotenuse is an integer. Now the triangle formed by this segment and the leg of the original triangle is geometrically similar to the original triangle, and its sides are partitioned into integer numbers of unit squares. However, the leg, which acts as the hypotenuse of the new triangle, doesn't have an even number of unit squares, while we just showed that it should. Therefore, the initial assumption that there was such an integer ratio must be wrong. http://people.idsia.ch/~juergen/ray.html https://en.wikipedia.org/wiki/Ray_Solomonoff Ray Solomonoff (1926-2009), pioneer of Machine learning, founder of Algorithmic Probability theory, father of the Universal Probability Distribution, creator of the Universal Theory of Inductive Inference. First to describe the fundamental concept of Algorithmic Information or Kolmogorov Complexity. In the new millennium his work became the foundation of the first mathematical theory of Optimal Universal Artificial Intelligence. A framework for Frontend web development -> Class component. Can have state. -> Stateless function component. Doesn't have state Values and methods passed to a component when we use it (like arguments) propTypes, default properties values and methods managed by the component itself. way of referencing an instance of a component from within a React app. It's like a DOM id of a component, that you can use to refer to that component. 
Adding or removing components to the DOM is called mounting and unmounting. A Renormalization group scheme based on coarse-graining and rescaling over real space. Real-space renormalization group and percolation See Critical phenomena in percolation Renormalization Group Theory - Percolation. In particular, see here. A real-space renormalization group for site and bond percolation Recurrent neural nets. Vanishing gradient problem: naively, RNNs don't give you long-term memory, so you have Long short-term memory networks See Percolation Section on percolation in Mason and Gleeson's book on Dynamical processes on networks, and in Newman's networks book. In particular, see Newman's book chapters 12, 13, and 17, for detailed calculations of GCC sizes, and other ones. Note that the standard calculation determines whether a GCC exists for an infinite network (for instance, the locally tree-like assumption is valid for infinite networks, and other parts of his calculations assume infinite size). Finite size effects should be interesting to explore. Recent advances in percolation theory and its applications See Complex systems LectureNotes. See Random deterministic automata ON THE NUMBER OF DISTINCT LANGUAGES ACCEPTED BY FINITE AUTOMATA WITH n STATES Enumeration of Automata, Languages, and Regular Expressions The state complexity of random DFAs On the average state and transition complexity of finite languages REGAL: A Library to Randomly and Exhaustively Generate Automata in C++! 
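As an illustration of the real-space RG for percolation mentioned above: the textbook majority-rule scheme for site percolation on the triangular lattice groups sites into 3-site cells (length rescaling b = √3), giving the recursion p′ = 3p²(1−p) + p³. Its unstable fixed point p* = 1/2 reproduces the exact p_c for this lattice, and linearizing the flow gives ν = ln b / ln λ ≈ 1.35, close to the exact 4/3:

```python
import math

def rg_map(p):
    """Majority rule for 3-site cells on the triangular lattice:
    the renormalized site is occupied if at least 2 of the 3 sites are."""
    return 3 * p**2 * (1 - p) + p**3

def flow(p, steps=100):
    """Iterate the RG map; points off the critical surface flow to 0 or 1."""
    for _ in range(steps):
        p = rg_map(p)
    return p

low = flow(0.49)    # below p_c: flows to the empty phase, p -> 0
high = flow(0.51)   # above p_c: flows to the occupied phase, p -> 1

p_star = 0.5                          # unstable fixed point; exact p_c here
lam = 6 * p_star - 6 * p_star**2      # R'(p) = 6p - 6p^2 evaluated at p*
b = math.sqrt(3)                      # length rescaling factor of the cell
nu = math.log(b) / math.log(lam)      # correlation-length exponent ~ 1.355
```

This is the standard pedagogical example of a real-space RG scheme: the separatrix at p* separates flows to the two trivial fixed points, exactly as in the dynamical-systems picture of RG flow.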
Distribution of the number of accessible states in a random deterministic automaton Enumerating Finitary Processes Sampling different kinds of acyclic automata using Markov chains Random walk on sparse random digraphs A Hitchhiker's Guide to descriptional complexity through analytic combinatorics A Survey on Operational State Complexity An Introduction to Descriptional Complexity of Regular Languages through Analytic Combinatorics Mixing times of random walks on dynamic configuration models Book: The Duffing Equation: Nonlinear Oscillators and their Behaviour See MMathPhys miniprojects and Duffing oscillator More papers and references: https://en.wikipedia.org/wiki/Intermittency https://en.wikipedia.org/wiki/Crisis_%28dynamical_systems%29 Y. Ueda, Steady Motions Exhibited by Duffing’s Equation: A Picture Book of Regular And Chaotic Motions Catastrophes with Indeterminate Outcome Stewart, H. B. ; Ueda, Y. EXPLOSION OF STRANGE ATTRACTORS EXHIBITED BY DUFFING'S EQUATION - Yoshisuke Ueda Common dynamical features on periodically driven strictly dissipative oscillators (introduces torsion and winding numbers) Comparison of bifurcation sets of driven strictly dissipative oscillators Wada basins https://en.wikipedia.org/wiki/Lakes_of_Wada Wada basin boundaries and basin cells Other link Unpredictable behavior in the Duffing oscillator: Wada basins Experimental investigation of the response of a harmonically excited hard Duffing oscillator From here Analytical methods Exact analytical solutions for forced cubic restoring force oscillator Uses Jacobi elliptic function (only for undamped Ueda oscillator I think). 
A comparison of classical and high dimensional harmonic balance approaches for a Duffing oscillator Second order averaging and bifurcations to subharmonics in duffing's equation Subharmonic Oscillations in Nonlinear Systems Chaotic states and routes to chaos in the forced pendulum Organization of periodic orbits in the driven Duffing oscillator Structure in the bifurcation diagram of the Duffing oscillator superstructure in the bifurcation set of the duffing equation General case of crisis-induced intermittency in the Duffing equation for double-well Duffing oscillator. On the jump-up and jump-down frequencies of the Duffing oscillator More books: Chaos in Nonlinear Oscillators: Controlling and Synchronization
By M Lakshmanan, K Murali Antimonotonicity reversal of period-doubling cascades Discriminative Supervised learning where the output value is continuous and quantitative (i.e. it has an ordering, and a notion of closeness (a metric)). Regularizer helps control the model complexity (by constraining the size of the parameter vector). It can also be seen as adding a prior (in Bayesian statistics) See Machine learning https://en.wikipedia.org/wiki/Markov_decision_process A Markov decision process is a 5-tuple (S, A, P, R, γ), where (Note: The theory of Markov decision processes does not state that S or A are finite, but the basic algorithms below assume that they are finite.) The core problem of MDPs is to find a "policy" for the decision maker: a function π that specifies the action π(s) that the decision maker will choose when in state s. Note that once a Markov decision process is combined with a policy in this way, this fixes the action for each state and the resulting combination behaves like a Markov chain. The goal is to choose a policy that will maximize some cumulative function of the random rewards, typically the expected discounted sum over a potentially infinite horizon E[Σ_t γ^t R(s_t, a_t)], where γ is the discount factor and satisfies 0 ≤ γ ≤ 1. (For example, γ = 1/(1+r) when the discount rate is r.) γ is typically close to 1. Because of the Markov property, the optimal policy for this particular problem can indeed be written as a function of s only, as assumed above. MDPs can be solved by Linear programming or Dynamic programming. Dynamic programming approach The algorithm has the following two kinds of steps, which are repeated in some order for all the states until no further changes take place.
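A minimal value-iteration sketch of this dynamic-programming approach (the two-state MDP below is an invented toy example, not from any of the references):

```python
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-9):
    """Value iteration for a finite MDP.
    P[s][a] is a list of (prob, next_state); R[s][a] is the expected reward.
    Returns the optimal value function and a greedy policy."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                       for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    policy = {s: max(actions,
                     key=lambda a: R[s][a]
                     + gamma * sum(p * V[t] for p, t in P[s][a]))
              for s in states}
    return V, policy

# Toy MDP: in state 0, "go" costs a little but usually reaches state 1,
# where "stay" keeps collecting reward.
states, actions = [0, 1], ["stay", "go"]
P = {0: {"stay": [(1.0, 0)], "go": [(0.8, 1), (0.2, 0)]},
     1: {"stay": [(0.9, 1), (0.1, 0)], "go": [(1.0, 0)]}}
R = {0: {"stay": 0.0, "go": -0.1}, 1: {"stay": 1.0, "go": 0.0}}
V, policy = value_iteration(states, actions, P, R)
```

For this toy problem the optimal policy is to "go" in state 0 and "stay" in state 1; solving the Bellman equations by hand gives V(0) ≈ 7.70 and V(1) ≈ 8.91, which the iteration reproduces.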
They are defined recursively as follows: the value V(s) will contain the discounted sum of the rewards to be earned (on average) by following that solution from state s. Their order depends on the variant of the algorithm; one can also do them for all states at once or state by state, and more often for some states than others. As long as no state is permanently excluded from either of the steps, the algorithm will eventually arrive at the correct solution. There are variants, in particular value iteration and policy iteration, described in the Wiki page. Deep reinforcement learning See Nando's lectures OpenAI Gym Example: https://github.com/joschu/modular_rl Pavlov.js - Reinforcement learning using Markov Decision Processes See also Decision theory A relation is a subset of a Cartesian product. A relation is often used to refer to a binary relation, which is a subset of A × B. An element a is said to be related to b (denoted aRb) if the pair (a, b) is in the relation. A relation on A is used to refer to a subset of A × A. A Function defines a relation, but not all relations correspond to functions. Examples of relations Total ordering –
Partial ordering In 1969, Fortuin and Kasteleyn (FK) [27,28,103,104] found an interesting mapping between the q-state Potts model, which includes the Ising model for q = 2, and a correlated bond-percolation model called the random-cluster model. It can be shown that there is a one-to-one correspondence between different
thermodynamic quantities and their geometric counterparts based on the statistical and fractal properties of FK clusters. This allowed powerful renormalization group ideas to be used [74]. Swendsen and Wang [105], and then Wolff [106], have exploited this mapping to devise extraordinarily efficient Monte Carlo algorithms. There are mappings between the Ising model at a given dimension and a model of manifolds surrounding the geometric spin clusters. Percolation and the Potts model. Many of the tools of Statistical physics have been applied to percolation through these mappings. See also Critical phenomena, field theory... A method to obtain macroscopic properties from microscopic theories, among other things. The general framework (as applied to critical phenomena) is presented below. For other applications, the later steps will be different, but the general setup is the same. 1. Define an RG scheme (often involving coarse graining and scaling; this is the case, for instance, in Real-space renormalization group), that defines new variables, while leaving the partition function fixed (or at least approximately fixed). 2. This scheme produces an RG transformation on the couplings/parameters of the theory. This transformation, if iterated, produces an RG flow in the space of parameters. The flow can indeed be analyzed with the tools of the theory of Dynamical systems. 3. Any point near a fixed point in the space of parameters has relevant and irrelevant (and possibly marginal) directions. These correspond to natural coordinates related to the unstable and stable manifolds of a fixed point, which in the linear neighbourhood of the fixed point are called scaling variables. Relevant directions are the ones that determine the long-time dynamics under the RG flow. 4. Changes in tunable parameters of the theory (like temperature, volume, external magnetic field, etc.) can be related to changes in coupling constants that produce the same change in the free energy. 
These changes should be along relevant directions because tunable parameters can affect the qualitative macroscopic behaviour of the theory, and so should affect the long-time behaviour of the theory under the RG flow. 5. A critical surface corresponds to the stable manifold of a saddle fixed point (this manifold is also called separatrix in Dynamical systems theory, because it separates qualitatively different future flows, corresponding to different phases, in a physical system). A critical point of a family of theories parametrized by a parameter, and spanning a 1D manifold (curve) in the whole space of theories is the intersection of this curve with the critical surface. 6. Theories near the critical point evolve to the vicinity of the fixed point under a finite number of iterations of the RG transformation. Theories with slightly different tuning parameters evolve to slightly different points in the vicinity of the critical point. In particular, it can be argued that for a bicritical point (with two relevant directions), there will be a relevant variable that corresponds to "thermal" deviation, , and another to a "magnetic" deviation, (see Cardy's book for some more explanation). Here deviations refer to deviations from the critical point. These relations are linear simply because we are taking and to be small (near the critical point), and we have Taylor expanded (assuming relation is analytic). , and are called scaling factors and are non-analytic. 7. From the RG scheme, one easily derives how , and change close to the fixed point, using the linearized RG flow. From the RG scheme, one can also easily find how the free energy (per volume, or per site), changes under RG flow, and thus how it changes under changes of , and . 8. 
Finally, by relating u_t and u_h to t and h, the renormalization group allows us to find how the free energy f changes under changes of the thermodynamic variables (t and h), and thus it allows us to find thermodynamic coefficients and quantities (which are derivatives of f w.r.t. thermodynamic quantities, such as t and h), as functions of the thermodynamic variables t and h. These often have power law form, and from them we can extract critical exponents. These critical exponents turn out to depend just on the dimensionality and the eigenvalues of the relevant variables near the fixed point. Thus, any theory with a critical point flowing to this same fixed point will have the same critical exponents, and is said to belong to the same universality class. These last steps can be seen carried out for the case of the spin-block transformation (a particular RG scheme) in Cardy's book, or in this page. The resulting form of the (singular part of the) free energy is f(t, h) = |t|^(d/y_t) Phi(h |t|^(-y_h/y_t)), where Phi is known as a scaling function. The scaling exponents turn out to be combinations of d and the RG eigenvalues, e.g. alpha = 2 - d/y_t, beta = (d - y_h)/y_t, gamma = (2 y_h - d)/y_t. Scaling relations relate the critical exponents as explained in the picture. Sometimes, because of the generality of this, the above form of the free energy is assumed instead of derived from RG, and this is known as the scaling hypothesis. See this series of videos: 6. The Scaling Hypothesis Part 1 A process in which you reverse the osmotic flow by applying a pressure larger than the osmotic pressure; it has many applications in industry, for instance in desalination technologies. See Osmosis See also Piezodialysis for an alternative. Rheology is a branch of Continuum mechanics that studies the flow of matter, primarily in a liquid state, but also as 'soft solids' or solids under conditions in which they respond with plastic flow rather than deforming elastically in response to an applied force. That is, rheology does not study a particular class of matter, but the flow of any matter. Holonomic systems.
A robot is holonomic if all the constraints that it is subjected to are integrable into positional constraints. Control theory and control systems Deep learning for grabbing objects http://techcrunch.com/2016/03/08/what-could-go-wrong/. http://googleresearch.blogspot.co.uk/2016/03/deep-learning-for-robots-learning-from.html. ArXiv paper Research in China: ArXiv paper News A rubber is a viscoelastic Polymer (also called an elastomer). What makes it viscoelastic is most often that the polymer is cross-linked (though not too cross-linked, as that can lead to rigid materials). Traditionally, cross-linking was done by exposing natural latex to sulfur, a process known as vulcanization. Although rubbers are viscoelastic, there is really a continuum between solid and viscoelastic, and some are closer to solids, while others are more clearly viscoelastic. Silly putty is interesting (apart from fun), because it has viscoelastic properties, but the polymers it's made of are not cross-linked, they are just very long! Viscoelastic Behavior of Rubbery Materials Glass transition temperature There is a temperature, called the glass transition temperature, below which a cross-linked polymer stops being viscoelastic (and thus a rubber), and becomes glassy and hard. Above the glass transition temperature, the polymer chains are loose and floppy, and that's why a rubber classifies as a soft material. Rubbers are also thermoplastic. Networks with power-law degree distributions are sometimes called scale-free networks. A power law degree distribution has the form p_k ∝ k^(−α), where α is the exponent, and is found in many examples of real-life networks, and in many other places (see Power laws). Values 2 ≤ α ≤ 3 are typical. Also typically, the power law is only obeyed for the tail of the distribution, but not for small values of k. And typically it is also not obeyed in the high end, for example, due to some cut-off.
Detecting and visualizing power laws The simplest approach is a log-log plot of the histogram of the degree distribution (see Large-scale structure of networks). One problem is that the tail of the distribution, where the power law is usually followed, often has very few samples, and so statistical fluctuations are relatively larger, and make it hard to judge if the distribution follows a straight line in the log-log plot. Finding the right bin size is a way to improve this, but this is always a matter of compromise: larger bins reduce the statistical error in the tail, while smaller bins give more detail of the distribution. An even better strategy is to increase the size of bins for larger degrees (normalizing by bin size so that the different bins can be compared). A way to do this is with logarithmic binning, where each bin is a constant factor larger than the previous bin, often a factor of 2. Another way to detect power laws is by using the cumulative distribution function, P_k, which is the probability that the degree of a vertex is k or larger (i.e. P_k = Σ_{k'≥k} p_{k'}). If p_k follows a power law (for k ≥ k_min, say), then P_k also does approximately for those k (as can be shown by approximating the sum by an integral), with exponent α − 1. As plotting this function does not require binning (the noise gets smaller in the cumulative distribution, and is smallest in the tail!), it doesn't throw away information. One way to get this information is via the ranks of the vertices, i.e. their position in a list ordered in descending order (this agrees exactly with their cumulative frequency if no nodes have the same degree, and this is approximately true for the tail of the distribution). These plots are often called rank/frequency plots. One disadvantage of cumulative distribution functions is that nearby points are correlated, and so a linear fit using standard techniques (like least squares), which assume independence of points, gives biased answers.
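The binning and fitting ideas above can be sketched numerically. A minimal illustration with synthetic samples: logarithmic binning with factor-of-2 bins, and the maximum-likelihood exponent estimate α = 1 + n / Σ ln(k_i/k_min) (the continuous-distribution version of the formula; the sample sizes and exponent here are arbitrary choices for the example):

```python
import random
import math

def sample_power_law(alpha, k_min, n, seed=0):
    # Inverse-transform sampling from a continuous power law p(k) ~ k^(-alpha), k >= k_min
    rng = random.Random(seed)
    return [k_min * (1 - rng.random()) ** (-1.0 / (alpha - 1)) for _ in range(n)]

def log_binned_histogram(samples, k_min, factor=2.0):
    # Bin edges grow by a constant factor; counts are normalized by bin width
    k_max = max(samples)
    edges = [k_min]
    while edges[-1] < k_max:
        edges.append(edges[-1] * factor)
    hist = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        count = sum(1 for k in samples if lo <= k < hi)
        hist.append(((lo * hi) ** 0.5, count / (hi - lo)))  # (geometric midpoint, density)
    return hist

def mle_exponent(samples, k_min):
    # Maximum-likelihood estimate: alpha = 1 + n / sum(ln(k_i / k_min))
    tail = [k for k in samples if k >= k_min]
    return 1 + len(tail) / sum(math.log(k / k_min) for k in tail)

samples = sample_power_law(alpha=2.5, k_min=1.0, n=50000)
print(round(mle_exponent(samples, 1.0), 2))  # should be close to the true alpha = 2.5
```

The MLE avoids both the binning compromise and the correlated-points problem of fitting the cumulative distribution directly.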
In fact this is also true for the degree distribution function itself, although for different reasons ([72,141] in Newman's book). [72] has many details, including a formula for determining α from the data directly (the most reliable way), and other useful results and tools. For more properties see Power laws. Another important characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases. This distribution also follows a power law. Introduced by Oded Schramm around 2000: Scaling limits of loop-erased random walks and uniform spanning trees. With applications to Percolation theory. Good reviews of SLE for physics. The knowledge, methods, and everything else regarding the understanding of the Cosmos. This includes essentially structures based on logic (and Mathematics, in general) that must match what is observed in the Cosmos. See The Scientific Method-Richard Feynman, and Philosophy of science Lay the concrete foundation for the rest of the sciences, by looking at fundamental structures and ideas. From the more theoretical to the more applied: Philosophy of science -> Mathematics -> Theoretical computer science -> Mathematical methods and Scientific computing Portal:Contents/Mathematics and logic Natural science is often defined as the part of science studying natural phenomena (that is, those not caused by Humans). These are, in some sense, the foundational sciences, as everything (including Humanity) is ultimately part of Nature (the Physical world). Roughly, we can categorize the natural sciences in order of the complexity of the studied systems, forming a sort of hierarchy of emergent new phenomena: Physics -> Chemistry -> Biology -> Cognitive science Portal:Contents/Natural and physical sciences Systems science studies very complex natural phenomena, as well as human phenomena (which are of course a result of natural phenomena, but often of the highest complexity we know).
It is the application and integration of the more reductionist ideas of the foundational sciences to larger systems. The systems sciences sit at the highest level of complexity, looking at parts of the Cosmos made of many parts interacting in complex ways. Some of the most important ones are the Social sciences, which look at societies (large collections of complex agents). Portal:Contents/Society and social sciences The distinctions above are fuzzy, and a bit ambiguous. This is partly because the History of science is very complex, with conflicting ideas of how science should be organized. However, as can be seen from above, my preferred way of organizing it is an approximate hierarchy of complexity: from simple (reductionist) laws to complex systems. Wikipedia:Portal/Directory/Science and mathematics Thaumaturgy in the Age of Science by Prof. V. Balakrishnan Free MIT books: https://archive.org/details/mitlibraries Crowdfunded science: https://experiment.com/ http://colorfulengineering.org/SCICOMP.html Numerical methods for differential equations Gentoo Science Overlay has a nice collection of scientific computing software Self-assembly There are two main types: Self-assembly of active colloids DNA nanotechnology is mostly based on self-assembly http://phys.org/news/2016-02-scientists-gold-nanoparticles-diamond-superlattices.html Self-assembly, modularity, and physical complexity Nature Materials: Topological defects in liquid crystals guide self-assembly The Free-Energy Landscape of Clusters of Attractive Hard Spheres Dynamical Arrest in Attractive Colloids: The Effect of Long-Range Repulsion The Information Capacity of Specific Interactions Self-Assembly of Structures with Addressable Complexity Size limits of self-assembled colloidal structures made using specific interactions A geometrical approach to computing free-energy landscapes from short-ranged potentials Design principles for self-assembly with short-range interactions Hierarchical self-assembly
Error correcting self-assembly Active colloid, Self-assembly, Collective behaviour of active colloids Self-assembly of active colloidal molecules with dynamic function Self-Assembly of Catalytically Active Colloidal Molecules: Tailoring Activity Through Surface Chemistry online While individual colloids that are symmetrically coated do not exhibit any form of dynamical activity, the concentration fields resulting from their chemical activity decay as 1/r and produce gradients that attract or repel other colloids depending on their surface chemistry and ambient variables. This results in a nonequilibrium analog of ionic systems, but with the remarkable novel feature of action-reaction symmetry breaking. See Collective behaviour of active colloids for further derivations of similar effective interactions between active colloids. The effective interaction, in the far-field regime, turns out to be analogous to the Coulomb interaction with generalized charges that break action-reaction symmetry. In particular, we differentiate between the charge that produces the field, α, and the charge that responds to the field, μ. Model and simulation: There is a highly successful and widely used restricted primitive model (RPM) for charged colloids based on Coulomb interactions augmented with short-range steric repulsion between the particles. A generalization is made to the nonequilibrium active colloids, and the model is analyzed using Brownian dynamics simulations, to explore novel phenomena in this system. Periodic boundary conditions are used, and interactions are treated using the minimal image convention (what is this?) Approximations: for simplicity, they use a model in which the catalytic activities of the colloids are simplified into net production or consumption of chemicals with given rates. They also assume the substrate concentration is constant within the time of their simulations, which is a good approximation in the dilute limit.
we do not consider the anomalous superdiffusion at relatively short time scales In the studied experimental systems, the Peclet number is small (the Peclet number is Pe = vσ/D, where v is the velocity of the colloid, σ is its diameter, and D is the diffusion coefficient of the solute molecules). This means that the solute concentration profile relaxes very quickly to a comoving cloud when a colloidal particle moves. At finite Pe, the cloud is distorted. This also means that we can ignore the spontaneous symmetry breaking (spontaneous autophoretic motion of isotropic particles) at large Pe. Concentration fields are assumed to be far-field. Near-field corrections would have to be calculated by solving the diffusion equation, and the resulting forces will in general not be pairwise additive. However, the forces retain the action-reaction asymmetry, and near-field effects will only affect the dynamics quantitatively. Hydrodynamic interactions are ignored, but their effect would just change the dynamics quantitatively (and not qualitatively). See more details of the model here. For the results they use to estimate the effect of hydrodynamic interactions see Hydrodynamic simulations of self-phoretic microswimmers Brownian dynamics simulations are done so that the colloids are constrained to move in 2D (while the diffusing particles diffuse in 3D, so the concentration still decays as 1/r). Non-equilibrium effects When the effective interactions between the particles are not symmetric, the system cannot reach an equilibrium state because the condition of detailed balance will not be fulfilled. This can manifest itself in the form of frustration that leads to nonequilibrium fluxes. This also means that the long-time behaviour may include limit cycles (oscillatory instability, see below). The internal dynamics of quasi-stable (for small perturbations) clusters for the case of two kinds of particles (A and B) can be analyzed using d'Alembert's principle (see their Appendix).
A Hopf bifurcation can take place (where the parameters are the charges of the two kinds of particles), so that in a certain regime a stable limit cycle forms. This is the oscillatory instability. This is demonstrated in the A4B8 colloidal molecule. What symmetry makes the second harmonic absent? Probably some dynamical symmetry. In the AB3 molecule one finds that in many parameter regimes, there are two stable configurations, and the system stochastically jumps between the two. One of the configurations has the B colloids symmetrically placed around the A, while in the other they are asymmetrical, causing (due to the asymmetry of the forces of the colloids in the fluid) a net self-propelling velocity. The motion of the internal degrees of freedom is again derived using d'Alembert's principle. There is an angle variable which is cyclic, due to rotational invariance, and gives a conservation law. The other two angles follow a set of coupled ODEs which have equilibria corresponding to the stable configurations. By simplifying the dynamics to the line where the two angles are equal (because both equilibria lie on it), one can obtain a single-variable Langevin equation and a corresponding Fokker-Planck equation to study the probability distribution of the system, which can be used to find, for instance, how much time is spent on run vs tumble behaviour. This was measured from the Brownian dynamics simulations. The residence times in the run-and-tumble phases exhibit an exponential dependence on the value of . The measured behaviours are consistent with what we expect from Kramers's first-passage time theory A quantity is self-averaging if its sample-to-sample fluctuations vanish in the thermodynamic limit. Non-self-averaging quantities are characteristic of Disordered systems In Self-diffusiophoresis (a kind of self-propulsion), a particle itself produces the compound it interacts with, through Diffusiophoresis, causing it to move. Self-phoretic particle.
Creates something that it then attracts or repels, and that something then pushes the surrounding fluid (creating a slip velocity). The particle is then indirectly pushing on the fluid.
Same kind of indirect propulsion as ionocrafts! Another analogy for the symmetric catalytic Active colloids, in the limit that particles of type B are attracted to A, but A is not attracted to B (see this paper): B particles are like little homing missiles that target A particles. An example is a particle that catalyzes the reaction 2H2O2 → 2H2O + O2, creating an O2 gradient and interacting with it. Another example is a particle that facilitates the polymerization of a biopolymer (e.g. actin), which creates a gradient because individual monomers diffuse, whereas the polymers do not. The latter process is one possible mechanism for the propulsion of Listeria bacteria by means of actin 'comet tails'. http://www.sas.upenn.edu/~tidema/research.html –Propulsion of a Molecular Machine by Asymmetric Distribution of Reaction Products– Propulsion of a Molecular Machine by Asymmetric Distribution of Reaction Products (article) "For a totally impermeable particle, depletion of the molecules near its surface causes a lateral slip velocity that results in net motion of the sphere." Depletion occurs only if the mobility is positive, which corresponds to the surface of the particle repelling the solvent molecules, thus depleting them near the surface. The diffusiophoretic effects also turn out to contribute to the diffusion of the particle (the induced velocities have a random component), with a diffusion constant that can be estimated. Consideration of rotational diffusion is important, as it determines the time scale over which the particle is able to move consistently in a given direction. Dynamics and efficiency of a self-propelled, diffusiophoretic swimmer Self-Diffusiophoresis in the Advection Dominated Regime Concentration around a self-diffusiophoretic particle See Diffusiophoresis for the equations giving the drift velocity of the particle given a particular concentration distribution on its surface (found from the above equation). a.k.a. driverless car See Tesla, Google Car, etc.
Our driverless dilemma Video: When is it OK for our cars to kill us? http://www.theverge.com/2016/6/30/12072408/tesla-autopilot-car-crash-death-autonomous-model-s Volvo See Electrophoresis Self-electrophoretic locomotion in microorganisms: Bacterial flagella as giant ionophores Ion Drive for Vesicles and Cells Colloid Transport by Interfacial Forces Ionocrafts, ionic wind?.. Locomotion of electrocatalytic nanomotors due to reaction induced charge autoelectrophoresis Catalytically Induced Electrokinetics for Motors and Micropumps Chemical Sensing Based on Catalytic Nanomotors: Motion-Based Detection of Trace Silver See Electrokinetic effects in catalytic conductor-insulator Janus swimmers Self-organization in non-equilibrium thermodynamics - Book by Prigogine et al Information Measures of Complexity, Emergence, Self-organization, Homeostasis, and Autopoiesis https://www.youtube.com/watch?v=Ba0zSNYkWtw http://pcp.vub.ac.be/SELFORG.html The Meaning of Self-organization in Computing. See Complexity theory, Complex systems, Sloppy systems Several people at the Free University of Brussels seem to be working on complex systems, from a very holistic approach. Evolution, and feedback. How does one define evolving systems that accomplish a desired function? We need the right feedbacks in a complex system. But the answer is not obvious. See Evolutionary computing On Self-Organizing Systems and Their Environments Principles of the self-organizing system http://bactra.org/thesis/single-spaced-thesis.pdf Self-Organisation of Symbolic Information See Written language. Selforganization of symbols and information Self-organizing map in unsupervised Machine learning An active particle, often a colloid, or a nanoparticle, that propels itself through a fluid, often via some phoretic mechanism, or via some mechanical propulsion mechanism (the particles are then often called microswimmers).
Generally, "active colloid" simply refers to a self-propelled colloid (and similarly with "active particle" in general). "In the current miniaturization race towards small motors and engines, a rapidly expanding subdomain is the quest for autonomous swimmers, able to move in fluids which appear very viscous given the small length scales (low Reynolds number). Robotic microswimmers that generate surface distortions is an avenue (e.g. by mimicking sperms [1]), but it seems equally interesting to try to take advantage of physical phenomena that become predominant at small scales. Interfacial ‘phoretic’ effects (electrophoresis, thermophoresis, diffusiophoresis, [2]) by which the gradients of fields (electrostatic potential, temperature, concentration) drive the motion of colloid particles, are from this standpoint a natural avenue given the increased surface to volume ratio of smaller objects. " Microscopic artificial swimmers Designing phoretic micro- and nano-swimmers. A common design for phoretic swimmers is the Janus swimmer design https://scholar.google.co.uk/scholar?hl=en&q=self-propelled+particle&btnG=&as_sdt=1%2C5&as_sdtp Phoretic self-propulsion Propulsion of a Molecular Machine by Asymmetric Distribution of Reaction Products See Self-diffusiophoresis Designing phoretic micro- and nano-swimmers See more at Designing phoretic micro- and nano-swimmers Single phoretic swimmer stochastic dynamics Self-Motile Colloidal Particles: From Directed Propulsion to Random Walk (experiment) Anomalous Diffusion of Symmetric and Asymmetric Active Colloids Stochastic dynamics of self-propelled colloids Self-assembly of phoretic active colloids Self-assembly of active colloidal molecules with dynamic function See Self-assembly of active colloids Self-Assembly of Catalytically Active Colloidal Molecules: Tailoring Activity Through Surface Chemistry See Self-assembly of active colloids Collective behaviour Clusters, asters, and collective oscillations in chemotactic colloids See 
Collective behaviour of active colloids There are many different regimes in their complicated mathematical models, and fuller understanding requires going through their models more carefully Emergent Cometlike Swarming of Optically Driven Thermally Active Colloids Collective Behavior of Thermally Active Colloids See Collective behaviour of thermally active colloids.
Others Electrokinetic effects in catalytic platinum-insulator Janus swimmers. See Catalytic conductor-insulator Janus swimmer, Electrokinetic effects in catalytic conductor-insulator Janus swimmers. See also: Locomotion of electrocatalytic nanomotors due to reaction induced charge autoelectrophoresis and Self-electrophoresis Boundaries can steer active Janus spheres See Boundary effects on the motion of active colloids Collective Behavior of Thermally Active Colloids (pdf) The motion of colloidal particles in a solution in the presence of an externally applied temperature gradient is known as thermophoresis or the Soret effect Since such thermally active colloids would create temperature profiles around them that decay as 1/r, in addition to causing them to self-propel, thermophoresis could provide a mechanism for them to interact with one another in a solution. The long-ranged nature of the intercolloidal thermophoretic interaction could lead to interesting collective behaviors. Sensitivity analysis is the study of how the uncertainty in the output of a mathematical model or system (numerical or otherwise) can be apportioned to different sources of uncertainty in its inputs. Global Sensitivity Analysis: The Primer A review on global sensitivity analysis methods A sequence space refers to the Set of all sequences of symbols, of a given length, where the symbols belong to an alphabet (another Set), which may be endowed with some more structure. More precisely, a sequence is a function from an index set I to the alphabet set A, and the sequence space is the set of all such functions. This is the same as the set A^I, where the power notation denotes Cartesian product, so the sequence space can be notated A^I. Two common examples of infinite sequence spaces are A^ℕ, where the index set is the naturals, and A^ℤ, where the index set is the integers. Members of this latter example are also called bi-infinite sequences. See this video As the sequence set is constructed as a Cartesian product, we can endow it with the Product topology.
The alphabet set, if finite, can be endowed with the Discrete topology Under this topology one can show that a set C is closed iff there is a tree T (a set of finite sequences, or strings) such that C = [T], where [T] is the set of all paths through T. See here for details and proof. Math 574, Lesson 1-5: Measures on Sequence Spaces As the Cylinder sets generate the Product topology, which in turn generates a Borel sigma-algebra on our space, then if we find the algebra generated by the cylinder sets, this algebra will generate the Borel sigma-algebra, and by the Caratheodory extension theorem, by defining a Measure on the sets of this algebra, we define a unique measure on the Borel sigma-algebra. In fact he shows that the set of finite unions of open cylinders (generated by the cylinder sets) themselves already forms an algebra. This is because a finite intersection of open cylinders can be expressed as a finite union of another set of open cylinders. Then, it turns out that we can define a unique measure on this algebra if we define the measure on the basic open cylinders only, and thus we can define a unique measure on the Borel σ-algebra of the sequence space. These sets are of the form [w], the set of all sequences that begin with the finite string w (these are called the basic open cylinders given by w). This measure is simply constructed by using the Measure additivity axiom for countable unions of disjoint (non-overlapping) sets (here applied to finite unions, as it is an algebra), and using some properties of open cylinders under intersections, which convert other arbitrary unions of open cylinders into unions of disjoint sets. This is proved from the property in the following lemma as well as the next lemma. This latter lemma uses an identity which of course follows from the additivity property of measures. You also require some normalization condition, like μ([ε]) = 1, where ε is the empty string, and thus [ε] is the set of all sequences that begin with the empty string, i.e. the full set.
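The cylinder-set construction above is easy to make concrete. A minimal sketch of a Bernoulli measure on the binary sequence space, checking finite additivity and normalization (the symbol probabilities p are an arbitrary choice for illustration):

```python
from itertools import product

# Bernoulli measure on the binary sequence space {0,1}^N:
# mu([w]) = prod_i p[w_i] for the basic open cylinder [w],
# the set of all sequences beginning with the finite string w.
p = {0: 0.3, 1: 0.7}

def mu(w):
    prob = 1.0
    for symbol in w:
        prob *= p[symbol]
    return prob

# Finite additivity: [w] is the disjoint union of [w0] and [w1],
# so mu([w]) must equal mu([w0]) + mu([w1]).
w = (0, 1, 1)
assert abs(mu(w) - (mu(w + (0,)) + mu(w + (1,)))) < 1e-12

# Normalization: the empty string's cylinder is the whole space.
assert mu(()) == 1.0

# The cylinders of all strings of length 4 partition the space.
total = sum(mu(w) for w in product([0, 1], repeat=4))
print(round(total, 10))  # 1.0
```

The additivity check is exactly the mechanism by which the measure on basic open cylinders extends uniquely to the algebra of finite unions, and from there (via Caratheodory) to the Borel σ-algebra.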
See also Symbolic dynamics, Shift space, Entropy reduction. See book on permutation entropy. Argument from Shannon code before proof of coding theorem in Info theory book. The constant c is the description of the program to compute the probability distribution. You input that program, plus the description in the Shannon-Fano code, to the Turing machine and it should be able to give you the string you want, so this constitutes a description of the string, and thus its length is an upper bound on the Kolmogorov complexity. If c is sufficiently small, i.e. the map is simple enough, the bound on the Kolmogorov complexity will be more stringent, and thus the coding theorem comes closer to an equality. This argument, however, only explains why, if there is bias in a simple map, one expects the bias to correlate with Kolmogorov complexity. But it doesn't explain why there should be bias in the first place. My arguments using transducers try to explain both, but it'd be nice to see how these two arguments fit together Given a set X, a σ-algebra on X is a subset Σ of the Power set of X (P(X)), s.t.: (i) X ∈ Σ; (ii) if A ∈ Σ, then X∖A ∈ Σ; (iii) if A_1, A_2, ... ∈ Σ, then ∪_i A_i ∈ Σ. (PP 1.2) Measure theory: Sigma-algebras From these axioms, one can show that a sigma-algebra is closed under countable intersections too. The sigma-algebra generated by a collection C, written σ(C), is the "smallest" sigma-algebra containing C. See here to see the precise definition and why this always exists. A common example is the Borel sigma-algebra. A sigma-algebra can be generated by an algebra, as explained in the Caratheodory extension theorem See Measures and metrics for networks How can we measure the "similarity" of two nodes (or edges, etc.)? Two main approaches. Two nodes may be: Mathematical implementations of these ideas: Structural equivalence: Regular equivalence: Another kind is automorphic equivalence See page 23 in here, as well as the discussion of automorphism in Graph theory.
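One common implementation of structural equivalence is the cosine similarity of rows of the adjacency matrix: two nodes are similar if they share many neighbours. A minimal sketch (the small graph here is an assumed example, not from any dataset):

```python
import math

# Structural equivalence via cosine similarity of adjacency-matrix rows.
# Assumed 5-node example graph, for illustration only.
A = [
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 1, 1, 0, 1],
    [0, 0, 0, 1, 0],
]

def cosine_similarity(i, j):
    # Dot product of rows i and j, normalized by their Euclidean norms.
    dot = sum(A[i][k] * A[j][k] for k in range(len(A)))
    ni = math.sqrt(sum(x * x for x in A[i]))
    nj = math.sqrt(sum(x * x for x in A[j]))
    return dot / (ni * nj)

# Nodes 0 and 3 share neighbours 1 and 2, so they are structurally
# similar even though they are not adjacent to each other.
print(round(cosine_similarity(0, 3), 3))  # → 0.816
```

The resulting similarity matrix is itself a (complete, weighted) similarity network in the sense discussed below.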
A Similarity network is one that expresses how similar entities (the nodes) are, the degree of similarity being the weight of the edge. The weight matrix entry w_ij represents the level of similarity between entities i and j in the network. A similarity network is almost always complete (the only deviation from completeness is from nodes that can't be compared for some reason). For example, if we have a matrix of votes, we can define w_ij in terms of it. A simple contagion is a property that spreads between individuals in such a way that an individual can get infected by simple exposure to another infected individual (possibly with a certain probability or rate). These are mostly compartmental models, and their extensions are used to model mostly biological contagions (like infectious diseases), as well as some IT contagions (like computer viruses). Often the model lives on a network that determines which individuals (nodes) interact (edges). Compartmental models are those in which the individuals can be in any of a number of states (often "susceptible", "infected", or "recovered"), and there are certain rules for the contagion. a.k.a susceptible-infected model. Just two states, "susceptible" and "infected". Susceptible individuals can get infected by infected individuals. Fully mixed SI model Assumes every individual has an equal probability (per unit time, i.e. rate) of meeting any other individual. A description is then made using a pair of Rate equations: dS/dt = −βSX/n and dX/dt = βSX/n, or ds/dt = −βsx and dx/dt = βsx, where S and X are the average numbers of susceptible and infected individuals, respectively, in a population of n individuals, and s = S/n and x = X/n. Furthermore, S + X is unchanged in time, so s + x = 1, and the above equation is equivalent to dx/dt = β(1 − x)x, which is the logistic growth equation. a.k.a susceptible-infected-recovered model or susceptible-infected-removed model. Adds the possibility of recovery (and subsequent immunity). Three states: "susceptible", "infected", and "recovered". Susceptible individuals can get infected by infected individuals.
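As a quick numerical check that the fully mixed SI dynamics reduce to logistic growth, a minimal forward-Euler integration compared against the closed-form logistic solution (β, x0, and the step size are arbitrary illustrative values):

```python
import math

# Fully mixed SI model: dx/dt = beta * (1 - x) * x (logistic growth),
# with x the infected fraction. Forward Euler, compared against the
# closed form x(t) = x0 e^{beta t} / (1 - x0 + x0 e^{beta t}).
beta, x0, dt, steps = 1.0, 0.01, 1e-4, 100000  # integrates t from 0 to 10

x = x0
for _ in range(steps):
    x += dt * beta * (1 - x) * x

t = dt * steps
exact = x0 * math.exp(beta * t) / (1 - x0 + x0 * math.exp(beta * t))
print(round(x, 4), round(exact, 4))  # the two should agree closely
```

The characteristic S-shape is visible in the numbers: growth is slow while x is small (few infected), fastest near x = 1/2, and saturates as the susceptible pool is exhausted.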
Individuals can recover after some time, and then become immune to new infections. The model can also be applied when the third state corresponds to a dead individual, as in this case the individual also doesn't participate in the network of possible infectious transmissions (though there are some subtleties in some cases, see note on page 632 of Newman's book). For this reason the R sometimes refers to "removed", encompassing both cases. Simplicity bias is a bias observed in many GP maps (see Bias in GP maps), and in many Complex systems (which can often be looked at as GP maps). Simplicity is defined as low complexity. Simplicity bias in discrete systems Simplicity bias in finite-state transducers See Activities and Sensitivities in Boolean Network Models Simplicity bias in Boolean threshold networks Discretized differential equations An example of Simplicity bias in discrete systems See Random automata and Evolving automata Numerical experiments on the simplicity bias in finite-state transducers On the theory/analysis side, I've been thinking about two questions: Ideas for understanding the simplicity bias in finite state transducers To have sufficiently high bias, we need a small non-coding loop. To have varied output, we need loops outside the non-coding regions. This is so that the time spent in non-coding regions can vary for different outputs. The slope of the designability/complexity plot corresponds approximately to the Topological entropy of the non-coding region, computed using the Determinant of a graph. However, there's also a factor due to the conversion between {KC complexity} and {number of bits spent in non-coding region}. For the first FST below, for instance, by computing KC for strings like and , I found that . Then from the topological entropy, which is , we find , which is consistent with what I found from the graph, approximately .
Now, this refers to the frequency, which is between and () If we average this quantity for , we get , which is close to the found from estimates above. See this paper about maximum LZ complexity, which goes like n/log₂(n), where n is the length of the string. See here for the desmos graph. Examples of finite-state transducers and their simplicity bias See related stuff in Descriptional complexity Information theory – Coding theory – Algorithmic information theory See Active matter for background. The path taken by a tracer will depend on the detailed spatial and temporal correlations of the velocity. Numerical simulations were conducted in Fluid transport by individual microswimmers. The striking feature of the tracer trajectories is their closed loops
, a consequence of the angular dependence of the flow field. Mathematically, it is because all terms in the multipole expansion, except the Stokeslet, are exact derivatives. The way this works: The entrainment effect is an example of Darwin drift. The Darwin drift volume has also been calculated for these active swimmers. We can estimate the contribution to diffusion from the entrainment effect. We know that the Diffusion coefficient can be expressed in 3D as: The entrainment length (Darwin drift) is of order (the size of the swimmer), when close (within distance ) to the swimmer. Thus, , whenever there is a swimmer within a volume . If there are swimmers per unit volume, the probability that a swimmer is in a given region of volume is approximately . Therefore, . Now the characteristic time step is the time scale that the swimmer, travelling at speed , takes to traverse the distance over which it interacts with the tracer particle. Therefore, . There is also a contribution to diffusion from the random reorientations that real bacteria perform at approximately regular intervals (in their run-and-tumble behaviour). Is the contribution to the diffusion constant from random reorientations, or finite run lengths? I think the former, due to the disappearance of , the run length, from the expression, where is a measure of the swimmer's dipole strength. Because variances add () for independent processes, the total diffusion coefficient is approximately the sum: For different kinds of systems, some of these diffusion coefficients will dominate. Zöttl and Stark paper. Swimmer equations of motion, for a swimmer in background flow : where \hat{\mathbf{e}} is the swimming direction of the point swimmer. In the case of Poiseuille flow, the equation determining the angle of the swimmer follows the nonlinear pendulum equation (with ).
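The entrainment-diffusion estimate above can be turned into an order-of-magnitude number: each nearby swimmer displaces the tracer by roughly the swimmer size, a swimmer is that close with probability ~ (number density) × (interaction volume), and the encounter lasts roughly (interaction range)/(speed). All numerical values below are hypothetical, chosen only as typical bacterial scales:

```python
import math

# Hypothetical, bacteria-like scales (not from the notes above):
a = 1e-6      # swimmer size ~ entrainment length per encounter, m
d = 5e-6      # interaction range, m
u = 20e-6     # swimming speed, m/s
n = 1e15      # swimmer number density, 1/m^3 (~1e9 per ml)

V = (4.0 / 3.0) * math.pi * d**3   # interaction volume around the tracer
p = n * V                          # probability a swimmer is within range
tau = d / u                        # encounter duration
D_entrain = p * a**2 / (6 * tau)   # 3D diffusion estimate D ~ p * (step)^2 / (6 * time)

print(D_entrain)   # ~1e-13 m^2/s, comparable to thermal diffusion of a micron-sized bead
```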
When swimming upstream, any deviation from the centre line is subject to a restoring torque from the vorticity and hence the swimmer trajectory oscillates around the centre of the channel. Swimming downstream, any perturbation about the centre line is amplified by the vorticity , and the swimmer tumbles in the flow. For sufficiently large velocities, it continues to tumble downstream, otherwise it reaches the walls and the simple theory must be supplemented by additional physics. One can also describe the motion of the swimmer in simple shear flow, and when there is a tendency to swim, on average, in a particular direction, "-taxis". One can use these ideas, with shear and gravitaxis (together often termed gyrotaxis), to explain, for instance, the formation of thin layers of phytoplankton in the oceans. Why micro-organisms often accumulate at surfaces First note that a simple self-propelled rod or sphere, when it eventually hits a surface, will then tend to move parallel to it, and only escape when a rotational fluctuation changes its direction enough to swim away from it. However, there is a less trivial effect, due to hydrodynamic interactions with the wall. These can be taken into account, because Stokes equations are linear, by considering an image swimmer at a position corresponding to the reflection of the swimmer on the wall, and pointing in the opposite direction (so as to satisfy the boundary condition of no normal flow at a free boundary (one that can slip; Like what? I mean, say a liquid-gas interface doesn't satisfy either no-slip or no normal flow, no? http://onlinelibrary.wiley.com/doi/10.1002/cpa.3160190405/abstract
It's no normal stress and no tangential stress.)). The extra terms needed to satisfy the no-slip condition are more complicated, and form the Blake tensor. But doesn't the reversed mirror-image Stokeslet cancel both the normal and tangential components of the velocity at the boundary? No, because the Stokeslet doesn't have the right symmetry, I think. However, hydrodynamic interactions are not the only contribution. For rotating swimmers, like E. coli, the effect of the wall drag on torque is important; it makes the swimmer move in circles near the wall. See more at Physics of microswimmers—single particle motion and collective behavior: a review.
A problem is singular when the limit problem () differs in an important way from the limit (). For example, a root is lost, or a derivative is lost in a DE. Problems that are not singular are called regular. For algebraic equations, often when a root is lost, it's because it goes to as . Its first term in the expansion may then be , for example. For the iterative method, different functions may be needed to find different perturbed roots of an algebraic equation, so that condition as is satisfied. Scale variables so that the problem becomes regular. For instance, if the first term in the expansion is , rescale . Indeed, the problem of finding the correct starting point for an expansion is equivalent to the problem of finding a suitable scaling to regularize the singular problem. Systematic approach: general rescaling. Let , with strictly of order as . Vary from small to large to identify dominant balances in which at least two terms are of the same order of magnitude as , while others are smaller. Scalings that result in dominant balances are called distinguished limits. Alternative approach: pairwise comparison, quicker when there are a small number of terms. Try to create a dominant balance between terms pairwise, and see if you can get it consistently. That way you can find the distinguished limits. Sitting is a basic human resting position. The body weight is supported primarily by the buttocks in contact with the ground or a horizontal object such as a chair seat. The torso is more or less upright. Sitting for much of the day may pose significant health risks, and people who sit regularly for prolonged periods have higher mortality rates than those who do not. Sloppy is the term used to describe a class of complex models exhibiting large parameter uncertainty when fit to data. The Fisher information matrix (FIM) can be used to estimate the uncertainty in each parameter in our model. "Many models in biology, engineering and physics have a very large number of parameters.
Often many of these are only known approximately. Moreover, John von Neumann's famous quip, "with four parameters I can fit an elephant, and with five I can make him wiggle his trunk", suggests that only a small set of these parameters is actually relevant. Could there be a fundamental theory of these Complex systems that allows us to work out what the key parameters are?" Perspective: Sloppiness and emergent theories in physics, biology, and beyond
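The FIM diagnostic of sloppiness mentioned above can be illustrated in a minimal sketch (the two-exponential model, rate values, and time grid are all my own choices): sloppiness shows up as FIM eigenvalues spread over orders of magnitude, the small eigenvalues being the "sloppy" parameter combinations.

```python
import numpy as np

# Toy sloppy model: y(t) = exp(-k1*t) + exp(-k2*t) with nearly degenerate rates.
# For unit Gaussian noise the FIM is J^T J, where J_ij = dy(t_i)/dk_j.
t = np.linspace(0.1, 5.0, 50)
k1, k2 = 1.0, 1.2                     # hypothetical rates, deliberately close

J = np.column_stack([
    -t * np.exp(-k1 * t),             # dy/dk1
    -t * np.exp(-k2 * t),             # dy/dk2
])
fim = J.T @ J
eigs = np.linalg.eigvalsh(fim)        # ascending order

# Stiff/sloppy eigenvalue ratio: large means one parameter combination
# is well constrained by data while the other is nearly unconstrained.
print(eigs, eigs[-1] / eigs[0])
```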
publication Parameter Space Compression Underlies Emergent Theories and Predictive Models Universally Sloppy Parameter Sensitivities in Systems Biology Models Sloppy-model universality class and the Vandermonde matrix A Smale horseshoe map is any member of a class of chaotic maps of the square into itself, of the kind introduced by Stephen Smale in 1967 while studying the behavior of the orbits of the van der Pol oscillator. HORSESHOES AND HOMOCLINIC TANGLES I read about homoclinic tangles when doing the nonlinear systems miniproject on the Duffing oscillator; see Thompson and Stewart, Nonlinear Dynamics and Chaos, and here. Whenever a pair of invariant sets (one outgoing and one incoming) of some saddle fixed point cross in a Poincaré plane (they can cross, as they don't represent trajectories), points must go outwards along the outgoing set, but the intersection point must also go inward along the ingoing set. This causes the outgoing set to cross the ingoing set at ever decreasing steps, and causes a shape like that of the Smale horseshoe. This is hard to explain without pictures. Random graph models capture well the small-world properties of real networks (see Large-scale structure of networks). The mean geodesic distance grows like , that is, much more slowly than , the number of nodes. However, they don't capture the high transitivity (i.e. high clustering coefficient) of real-world networks (where nodes which are neighbours of the same node are more likely to be neighbours of each other, especially true in social networks). One can easily construct models with high transitivity, like the triangular lattice, or the "circle model" where each node is connected to closest nodes, but these don't have small-world properties. The small-world model is a hybrid of the two, so that it displays both high transitivity and short path lengths. It was proposed in 1998 by Watts and Strogatz.
The model (Watts-Strogatz version) works by rewiring existing edges in a random fashion, the rewired edges becoming so-called shortcuts. Another version (Newman-Watts), which is easier to analyze analytically, doesn't rewire edges, but simply adds them (often we add one, with probability , per edge in the circle-model network). Degree distribution It is a Poisson distribution (in the limit of large , I think, right?), just like the random graph. However, it is cut off at , as we don't remove the original circle-model edges. Clustering coefficient Compute by counting triangles and triads. Mean shortest path No exact formula is known, but we know the scaling of the mean shortest distance, : , which comes from a scaling argument... An approximate form for can be found by mean-field methods.
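The small-world effect described above is easy to see numerically. A minimal pure-Python sketch of the Newman-Watts variant (ring size, neighbour range, shortcut probability, and the BFS-based path computation are my own choices):

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Circle model: each node linked to its k nearest neighbours on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def add_shortcuts(adj, p, rng):
    """Newman-Watts variant: add one random shortcut, with probability p, per edge."""
    n = len(adj)
    edges = [(i, j) for i in adj for j in adj[i] if i < j]
    for _ in edges:
        if rng.random() < p:
            a, b = rng.randrange(n), rng.randrange(n)
            if a != b:
                adj[a].add(b)
                adj[b].add(a)

def mean_shortest_path(adj):
    """Average geodesic distance over all node pairs, via BFS from every node."""
    total, pairs = 0, 0
    for s in adj:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

ring = ring_lattice(200, 3)
small_world = ring_lattice(200, 3)
add_shortcuts(small_world, 0.1, random.Random(1))

print(mean_shortest_path(ring))         # pure ring: grows linearly with n
print(mean_shortest_path(small_world))  # a few shortcuts shrink it dramatically
```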
One can see that there is a wide range of values for so that the network exhibits both high clustering and small mean shortest distance, showing that these are not at all incompatible. The conclusion from all this is that: Simulating in Matlab This page explains how to simulate the code in Matlab. Statistical physics of social dynamics Influence maximization in complex networks Social contagion processes See Complex contagions Opinion dynamics Study of societies. Societies are complex systems of complex beings; in particular animals and humans. The behaviour of the individual beings is studied in Behavioural sciences https://en.wikipedia.org/wiki/Social_science See https://en.wikipedia.org/wiki/Social_anthropology for human societies. Social science and engineering is included here. https://en.wikipedia.org/wiki/Community Plotch (watch+play) this: http://ncase.me/polygons/ Sociocyberneering Societal structure (characteristic of Civilization) societal organization: Soft condensed matter (often abbreviated to soft matter) is basically all forms of condensed matter (i.e. many particles more or less bound together (e.g., by Intermolecular forces)) that isn't a solid, so that it has features that are easily deformable at low energies (room thermal energies).
This includes polymers, Liquid crystals, complex fluids, Granular material, Foams, Emulsions, Colloids, and many kinds of mixtures that form mesoscopic structures. Also a lot of stuff in life falls under the "soft" category. I like it precisely because of its richness. Wiki: https://en.wikipedia.org/wiki/Soft_matter Typical features Statistical physics is important, in particular the interplay of energy and entropy, reflected in the free energy. For systems of many particles, one uses Statistical field theory. Though one can further simplify by ignoring fluctuations, using a Mean field theory. A fruitful way of studying phases is to study the phase transitions between them. Universality, coarse-graining, renormalization group. Percolation Keylogger Simple keylogger To open the keylogger log (which is very long), get part of the log only, using How to parse the output https://gist.github.com/kelly-ry4n/44822005a02d9ff115c12e4075adb256 See also Programming Performance Engineering of Software Systems Software system engineering Mobile app development The Solar System is the Planetary system containing Planet Earth and the Sun. A phase of matter characterized by elastic resistance against deformation. See Condensed matter physics. A solid material is a Material that is Solid at Room temperature. See Simon's solid-state physics book, and his Oxford lectures (recorded). How many watermelons per unit cell? watermelons = atoms. BTW picture shown isn't really a unit cell, but the same method of counting atoms is used for actual unit cells. A Dispersion (Chemistry) where the dispersed phase has particles in which all dimensions are smaller than approximately one nanometer (so that they aren't colloidal). https://en.wikipedia.org/wiki/Solution a solute is a substance dissolved in another substance, known as a solvent. The average length of a code is bounded below by the entropy of the random variable that models your data.
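The entropy lower bound on average code length can be checked directly against an optimal (Huffman) code. A minimal sketch (the example distributions are my own; the average-length trick sums the merged weights, since each Huffman merge adds one bit to every symbol beneath it):

```python
import heapq
import math

def entropy_bits(p):
    """Shannon entropy H(X) in bits."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def huffman_avg_length(p):
    """Average codeword length of a binary Huffman code for probabilities p."""
    heap = list(zip(p, range(len(p))))  # (probability, tiebreak id)
    heapq.heapify(heap)
    avg, counter = 0.0, len(p)
    while len(heap) > 1:
        (a, _), (b, _) = heapq.heappop(heap), heapq.heappop(heap)
        avg += a + b                    # this merge costs one bit per symbol below it
        counter += 1
        heapq.heappush(heap, (a + b, counter))
    return avg

p = [0.5, 0.25, 0.125, 0.125]   # dyadic probabilities: the entropy bound is met exactly
print(entropy_bits(p), huffman_avg_length(p))

q = [0.4, 0.3, 0.2, 0.1]        # non-dyadic: Huffman stays within one bit of entropy
print(entropy_bits(q), huffman_avg_length(q))
```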
See Data compression Breakthrough Starshot aims to demonstrate proof of concept for ultra-fast light-driven nanocrafts, and lay the foundations for a first launch to Alpha Centauri within the next generation. Along the way, the project could generate important supplementary benefits to astronomy, including solar system exploration and detection of Earth-crossing asteroids.
Engineering challenges A spanning cluster-avoiding process (SCA) is an Explosive percolation model based on classifying bonds into those that facilitate the creation of the spanning cluster, and those that don't, and preferentially selecting those that don't. They are similar to Achlioptas processes (-edge processes). However, they don't require the candidate edges to be chosen at random between any pair of nodes, and instead the candidate edges can belong to a predetermined underlying network, commonly a hypercubic lattice. They are capable of showing discontinuous transitions, for certain choices of the number of candidate edges chosen per step. The most common spanning cluster-avoiding process (introduced here) starts by considering a finite hypercubic lattice in dimensions of size and unoccupied bonds. Then, inspired by the best-of-m model (see Tricritical Point in Explosive Percolation), the rule of the model is as follows: Getting the Jump on Explosive Percolation Avoiding a Spanning Cluster in Percolation Models These models were introduced to clarify the order of the transition in explosive percolation processes in Euclidean lattices, which had been studied numerically before: Explosive Growth in Biased Dynamic Percolation on Two-Dimensional Regular Lattice Networks – Scaling behavior of explosive percolation on the square lattice. Extensive numerical simulations and theoretical results have shown that the explosive transition in the SCA model in the thermodynamic limit can be either discontinuous or continuous depending on dimension and the number of potential bonds (see here, here, and Two Types of Discontinuous Percolation Transitions in Cluster Merging Processes).
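The SCA rule itself needs an underlying lattice, but the closely related Achlioptas product rule mentioned above is easy to simulate with a union-find structure, and already shows how preferring the "less connective" of two candidate edges delays the transition (graph size, edge budget, and seeds below are my own choices; this is not the SCA model itself):

```python
import random

class DSU:
    """Union-find with path halving and union by size."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def largest_cluster(n, n_edges, rule, rng):
    dsu = DSU(n)
    for _ in range(n_edges):
        if rule == "er":                       # ordinary random-graph growth
            a, b = rng.randrange(n), rng.randrange(n)
        else:                                  # product rule: of two candidate edges,
            e1 = (rng.randrange(n), rng.randrange(n))   # keep the one whose endpoint
            e2 = (rng.randrange(n), rng.randrange(n))   # cluster-size product is smaller
            p1 = dsu.size[dsu.find(e1[0])] * dsu.size[dsu.find(e1[1])]
            p2 = dsu.size[dsu.find(e2[0])] * dsu.size[dsu.find(e2[1])]
            a, b = e1 if p1 <= p2 else e2
        dsu.union(a, b)
    return max(dsu.size[dsu.find(i)] for i in range(n))

n = 20000
m = int(0.7 * n)   # just past the ER threshold (m = n/2), well below the product-rule one
er_size = largest_cluster(n, m, "er", random.Random(0))
pr_size = largest_cluster(n, m, "product", random.Random(1))
print(er_size, pr_size)   # ER already has a giant component; product rule still suppressed
```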
A spatial network is a network that is embedded in some space. This affects our choices of models for random graphs. An example is the Planar network. Explicitly embedded in space vs. consequences of (implicit) system being embedded in space. For example, network of borders of countries vs. friendship network. Barthelemy's long review (my Kami file, not sure if it'll work: here) Otherwise link to the original. Empirical observations Two kinds of spatial network topologies: Measure strength, clustering coefficients, and betweenness centrality, and their correlations with degree. Also assortativity. Assortativity is flat (i.e. no degree-degree correlations) because while often hubs want to preferentially connect to hubs, they can't if spatial constraints don't allow such long (on average) links. Anomalies in the betweenness centrality–degree correlation. Fluctuations (for given degree) because of competition of spatial constraints (that want central nodes close to the spatial network barycenter) and degree. Topology-traffic correlations. Nonlinear correlations between non-topological quantities (like strength and distance strength) and a topological quantity (degree). A superlinear relation of the strength and degree indicates that links connecting to central (high-degree) nodes carry more traffic than average. Spatial constraints tend to cause this because they tend to reduce the number of high-degree hubs (as long links are costly). However, if the traffic stays the same, it must be distributed among the lesser-degree hubs, and so the increase of traffic with degree is faster. See page 45 of review. This is seen in strength-driven preferential attachment with spatial selection, in airline networks (and the Newman model that models them), in OTT (optimal traffic tree), Real-world networks Models for spatial networks Geometrical random graphs Spatial generalizations of the Erdos-Renyi graph. Random graph Spatial small worlds.
The Watts-Strogatz model in a d-dimensional lattice, where the probability of making a shortcut may depend on its length (a spatial constraint). Spatial growth models. Optimization of spatial networks The geometric form of the tree network is deduced from a single mechanism. The discovery that the shape of a heat-generating volume can be optimized to minimize the thermal resistance between the volume and a point heat sink, is used to solve the kinematics problem of minimizing the time of travel between a volume (or area) and one point. The optimal path is constructed by covering the volume with a sequence of volume sizes (building blocks), which starts with the smallest size and continues with stepwise larger sizes (assemblies). Optimized in each building block is the overall shape and the angle between constituents. The speed of travel may vary from one assembly size to the next, however, the lowest speed is used to reach the infinity of points located in the smallest volume elements. The volume-to-point path that results is a tree network. A single design principle – the geometric optimization of volume-to-point access – determines all the features of the tree network.
Mathematics and morphogenesis of cities: A geometrical approach Extracting Hidden Hierarchies in Complex Spatial Networks http://named-data.net/wp-content/uploads/2010HyperbolicGeometry.pdf Hyperbolic geometry http://arxiv.org/pdf/math-ph/0112039.pdf http://www.math.miami.edu/~larsa/MTH551/hyplect.pdf http://www.alcyone.com/max/reference/maths/hyperbolic.html http://eprints.soton.ac.uk/172655/1/2009_PIRT_Barrett.pdf https://www.math.brown.edu/~rkenyon/papers/cannon.pdf http://www.springer.com/gb/book/9789048186365 Spatial growth of real-world networks Evolving Transportation Networks Measuring the Structure of Road Networks Exploring the patterns and evolution of self-organized urban street networks through modeling Time Evolution of Road Networks Granular materials Polymer networks (blue phases..) Fiber networks can amplify stress Roots, vascularity, leaf venation, physarum networks, neural networks... https://en.wikipedia.org/wiki/Outerplanar_graph https://en.wikipedia.org/wiki/Godfried_Toussaint Toussaint hierarchy of different kinds of geometric planar graphs. Has been applied to physarum networks Fourier spectral discretization Finite difference formulas create dispersion effects not found in the original PDE. Similar effects seen in crystals, which are discrete by nature. One way to avoid these is to let the order of the finite difference formula tend to infinity. We then get spectral methods. The simplest flavours are: In the limit of infinite order, those finite differences approach the infinite Laurent matrix (or Laurent operator). Suppose we have the values of the solution function on our discrete periodic grid. The spectral approximation to is given by , where D is the spectral differentiation matrix. The fundamental idea of spectral collocation methods is: 1. Interpolate the data by a global interpolant (for example, a periodic trigonometric polynomial): 2. Differentiate and evaluate at the grid points.
From properties of the exponential, another way to compute the 2nd Fourier spectral derivative is: 1. Given , compute its DFT (discrete Fourier transform) 2. Multiply by : . 3. Take the inverse transform. Similar ideas lead to the one-way wave equation. Fill details below from lecture 10, when it's published (https://www0.maths.ox.ac.uk/courses/course/28839, and vid). ... Quadrature: trapezoidal rule integrating the interpolant Rootfinding: via eigenvalues of companion matrix ... ...
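The three DFT steps above can be sketched in a few lines of NumPy, here for the test function u(x) = sin x on a periodic grid over [0, 2π) (the grid size and test function are my own choices):

```python
import numpy as np

N = 64
x = 2 * np.pi * np.arange(N) / N
u = np.sin(x)

k = np.fft.fftfreq(N, d=1.0 / N)             # integer wavenumbers 0, 1, ..., -1
u_hat = np.fft.fft(u)                         # 1. DFT of the grid values
du = np.real(np.fft.ifft(1j * k * u_hat))     # multiply by ik, invert: first derivative
d2u = np.real(np.fft.ifft(-(k**2) * u_hat))   # 2.-3. multiply by (ik)^2, invert: second derivative

print(np.max(np.abs(du - np.cos(x))))         # spectral accuracy: error ~ machine precision
```

Unlike a fixed-order finite difference, the error here is limited only by rounding, which is the "infinite order" limit the notes describe.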
A more realistic kind of Artificial neural network. It is a model that is the basis for the design of Neuromorphic computing systems. aka Sherrington-Kirkpatrick model Disordered version of the Ising model, and corresponding magnetic materials showing disordered phases. A short course on mean field spin glasses solvable model of a spin-glass See also Ising model. See also Artificial neural network (near bottom) for some cool applications Long-Distance Behaviour of Correlation Functions in Disordered Systems Scale Invariance and Self-averaging in disordered systems Direct moment-moment coupling is too weak to account for the observed behaviour. In a metal such as copper, the outermost atomic electrons leave the individual copper atoms and more or less freely roam through the metal (thus becoming conduction electrons). So, in an alloy like copper manganese, it might be suspected that these conduction electrons are playing some role. And that suspicion is correct. Electron spins have two properties that are crucial to their mediation role: Four properties constitute the most prominent static features of materials we have come to call Spin glasses. I.e. non-equilibrium properties. https://en.wikipedia.org/wiki/Spin_glass http://www.birs.ca/events/2014/5-day-workshops/14w5082/videos Courses - F. Guerra "Equilibrium and off equilibrium properties of ferromagnetic..." Statistical mechanics of spin glasses and neural networks 8\3\16 no sound :( The spindle, or spindle apparatus is a structure that segregates chromosomes during cell division, and is formed by Microtubules, Molecular motors, and hundreds of other proteins. The spindle self-organizes during the division process. (https://en.wikipedia.org/wiki/Spindle_apparatus) For the frog Xenopus laevis, spindles are on average ~45 microns long, and ~30 microns wide. Microtubules in these spindles have an average length of ~7 microns (ref) and are at a density of ~50-100 microtubules/μm^2, implying that there are ~100,000 (ref 1, ref 2). Microtubules are polar polymers whose minus ends are relatively static and whose plus ends polymerize at a speed of ~10-20 μm/min (ref). There is no appreciable rate of rescues in these spindles? (ref), and the half-life of these microtubules is ~16 s, much shorter than the typical lifetime of a spindle – which can exist for several hours. Microtubules in the spindle interact with each other via motors and cross-linkers, and continuously slide toward the poles at a rate of ~2.5 μm/min (ref 1, ref 2). Nucleation and Transport Organize Microtubules in Metaphase Spindles Microtubule Plus-End Dynamics in Xenopus Egg Extract Spindles The kinesin Eg5 drives poleward microtubule flux in Xenopus laevis egg extract spindles. Although mitotic and meiotic spindles maintain a steady-state length during metaphase, their antiparallel microtubules slide toward spindle poles at a constant rate.
This "poleward flux" of microtubules occurs in many organisms and may provide part of the force for chromosome segregation. [...] Our results suggest that ensembles of nonprocessive Eg5 motors drive flux in metaphase Xenopus extract spindles. See Active matter Spindle self-organization arises from: Microtubules in the spindle are deep within the nematic phase, as their volume fraction, , is well above the volume fraction at which the isotropic phase is expected to lose stability, . However, their net polarity varies from parallel (with plus end towards center) at the ends, to antiparallel at the middle. Theory: The magnitude of the nematic field is taken to be constant throughout the spindle (note: the magnitude, not the direction!), while the magnitude of the polarity field depends on motor activity and self-advection. They do this because they consider the simplest theory that is consistent with all the data. See Supporting information (annotated) Theory based on that developed in this paper:
Fluctuating hydrodynamics and microrheology of a dilute suspension of swimming bacteria. Some parts can be derived using Poisson-bracket approach to the dynamics of nematic liquid crystals. How changes in volume due to microtubule polymerization (gaining the dimers) can also add to active stress, as in the case of cells growing in tissues: Fluidization of tissues by cell division and apoptosis Materials and apparatus LC-PolScope, http://openpolscope.org/. Type of microscope that uses light polarization. Metaphase arrested spindles assembled in Xenopus laevis egg extracts. Measurement methods LC-PolScope + Image processing -> extract spatio-temporal correlation functions from the movies obtained by microscope. Measure: Spinning disk confocal microscope, to record 3D time-lapse movies of spindles labeled with high concentration of fluorescent tubulin. These give 3D measurements of the density. See video Measuring stress fluctuations: obtained two-point particle displacements by tracking single molecules of fluorescently labeled tubulin, computed the two-point correlation between these single molecules along the direction perpendicular to the spindle axis. Measuring correlations. In particular, they measure correlations of the fluctuations at each pixel in the image relative to the time-average value of that pixel. This is so that the correlations don't contain information on the more or less steady average spatial structure of the spindle, and so we focus on the fluctuations on top of it. The Fourier transform of an autocorrelation gives the Power spectral density (PSD), which they use to compare predictions with experiment. They also use these comparisons to fit the parameters of the theory, as is done in many instances in Condensed matter physics, as they point out. They also show that their parameters are relatively few, showing strong predictive power of the theory, and also meaning that the agreement with experiment is strong validation of the theory. 
Measurement results: These are all consistent with the theory, as can be seen in the figure below: The calculated orientation of microtubules throughout the spindle quantitatively agrees with their LC-PolScope measurements. They reproduced the observed spatial variation of polarity. The calculated aspect ratio closely agrees with observation. Other spindle phenomenology to further investigate using the above theory: Nonequilibrium mechanics of active cytoskeletal networks. Microrheology, Stress Fluctuations, and Active Behavior of Living Cells.
We report: The {[fluctuations]' spatial and temporal correlations} indicate that {the cytoskeleton can be treated as a {coarse-grained continuum with power-law rheology, driven by a spatially random stress tensor field}}. {Combined with recent cell rheology results, our data} imply that {{intracellular stress fluctuations have a nearly power spectrum}, as expected for a continuum with a slowly evolving internal prestress.} A spectrum corresponds to a linear decay in time of a stress-stress correlation function (see WA computation, notice dividing by is like integrating the Fourier transform) within our experimental time window, and would be a natural consequence of slow evolution of intracellular stress. Explanation: The stress generation/relaxation may rely on a number of modes with diverse timescales, . In the simplest case, a stress autocorrelation would then be multiexponential, consistent with our result if all lie well outside of our measurable range. This is because the exponentials appear linear when the exponent . Four properties constitute the most prominent static features of materials we have come to call Spin glasses. Dilute magnetic alloys at higher concentrations of magnetic impurities were the first experimental examples of spin glasses. Because the spins interact, it was expected that the system would have some sort of ordered phase at low temperatures. Indeed a Phase transition was observed, with a susceptibility cusp at a particular transition temperature . The high-temperature phase was a paramagnetic phase. Then experiments on the nature of the lower-temperature phase were conducted. There exists a variety of experimental probes that can provide information on what the atomic magnetic moments are doing, and measurements using these probes indicated several things. However, the phase transition had some more surprises to reveal. Recall that at a phase transition, all the thermodynamic functions behave singularly in one fashion or another.
Surely the specific heat, one of the simplest such functions, should show a singularity as well. However, when one measures the specific heat of a typical spin glass, one sees . . . absolutely nothing interesting at all. All you see is a broad, smooth, rounded maximum, which doesn't even occur at the transition temperature (defined to be where the susceptibility peak occurs). A typical such measurement is shown in figure 4.2. So, returning to the topic at hand, we're faced with the following question: Is there a true thermodynamic phase transition to a low-temperature spin glass phase characterized by a new kind of magnetic ordering? Or is the spin glass just a kind of magnetic analog to an ordinary structural glass, where there is no real phase transition and the system simply falls out of equilibrium because its intrinsic relaxational timescales exceed our human observational timescales? If the latter, then the spins wouldn't really be frozen for eternity; they would just have slowed down sufficiently that they appear frozen on any timescale that we can imagine. As of this writing, the question remains open. A statistical field is often derived by averaging microscopic physics over mesoscopic lengthscales (in a particular way called coarse graining). This results in a free energy, , which (when exponentiated) gives the weight factor over which we integrate to get the partition function, . As the averaging gives a (macroscopic) field (as an approximation to a lattice average), the integral for is a Functional Integral, expressible as a Path Integral. This free energy can be written as a power series in the field. It turns out that only a few terms (the renormalizable ones, and maybe a few non-renormalizable ones) contribute for a given precision of interest (this is understood via the Renormalization Group). Thus, the only thing that fundamentally differentiates one theory/model from another is the symmetries of the field, which determine which terms can appear in the free energy. Dimensionality and transformation properties of the field (whether it is a scalar, a vector, a spinor, ...) also play a role. The microphysics only enters through the parameters of the theory. But as these are often few, they can be and most often are determined experimentally. For this reason statistical field theories are often referred to as phenomenological. Similar considerations apply in Quantum field theory. Assumptions: Statistical physics deals with the description of systems for which a deterministic description is either useless or impossible, so that one uses a statistical description. Here a deterministic description is understood in the context of the relevant physical description. For example Schrodinger's equation is deterministic, if the relevant physical description is the wavefunction. It is non-deterministic if one takes position and/or velocity as the relevant physical descriptions.
However, it is known that one can't describe quantum mechanical evolution purely with a statistical theory of position and velocity, without sacrificing some rather well-established physical principles or predictions. If the system is effectively classical (either because it is macroscopic, or for some other reason, that is probably ultimately related to Quantum decoherence), the need for a statistical description arises when the system is sufficiently chaotic. Most often this requires the system to: have many components and/or be coupled to a system with many components. For this reason, statistical physics is mostly applied to the description of systems of many particles in a gas, liquid or solid; or to one or a few particles coupled to one such large system. There are two main branches of statistical physics: Equilibrium statistical physics deals with such systems at equilibrium, that is, when the relevant macroscopic averages of the statistical description don't change with time. In practice, one often has two approaches: Non-equilibrium statistical physics deals with such a system out of equilibrium, so that averages can change in time. This is much harder to do in full generality, as systems offer much more diversity out of equilibrium, as may be expected. One often has three approaches: See also Complex systems, and Sloppy systems Entropy, Order Parameters, and Complexity Long-range interacting systems Bangalore School on Statistical Physics - V (video lectures) Bangalore School on Statistical Physics - VI (I'm on the 1st lecture on Long-range interacting systems See about disordered systems in Condensed matter physics, as these are interesting systems studied using statistical physics. 
Indian Statistical Physics Community Meeting 2016 Interesting papers on statistical physics and complex systems Non‐equilibrium thermodynamics: foundations, scope, and extension to the meso‐scale Non-equilibrium thermodynamics - de Groot and Mazur Statistical Mechanics II course Sethna's Statistical Mechanics: Entropy, Order Parameters, and Complexity MIT 8.333 Statistical Mechanics I MIT 8.334 Statistical Mechanics II Statistical physics, Optimization, Inference and Message-Passing algorithms Foundations of statistical mechanics What Is a Macrostate? Subjective Observations and Objective Dynamics The Backwards Arrow of Time of the Coherently Bayesian Statistical Mechanic Ludwig Boltzmann and entropy Lots of stuff about entropy.. Philosophy of statistical physics Probability in physics: stochastic, statistical, quantum Book: Ensemble modeling: inference from small-scale properties to large-scale systems Mathematical foundations: Probability theory, statistics, t-test Mathematical statistics Mathematical Statistics Videos some YB videos Self-Motile Colloidal Particles: From Directed Propulsion to Random Walk (experiment) Anomalous Diffusion of Symmetric and Asymmetric Active Colloids At times long compared to the rotational diffusion time, rotational diffusion leads to a randomization of the direction of propulsion, and the particle undergoes a random walk whose step length is the product of the propelled velocity V and the rotational diffusion time, leading to a substantial enhancement of the effective diffusion coefficient Links Notes on Nonequilibrium StatPhys MT2015 Oxford (mostly stochastic processes) Discrete Stochastic processes MIT course Stochastic processes MIT notes Nice notes on applications of stochastic processes List of stochastic processes topics Stochastic processes Martingales, Martingales Through Measure Theory All these generally are Markov processes https://en.wikipedia.org/wiki/It%C3%B4_calculus Chemistry Oscillating chemical reactions Biology
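The enhanced diffusion quoted above (random walk with step length set by the propulsion speed and the rotational diffusion time) follows the standard persistent-random-walk form. A small sketch checking the two limits; this is my own illustration from integrating the velocity autocorrelation, not a formula taken from the cited papers:

```python
import numpy as np

def msd_persistent(t, v, tau):
    """Mean-squared displacement of a persistent random walk in 2D:
    <r^2(t)> = 2 v^2 tau^2 (t/tau + exp(-t/tau) - 1),
    obtained by integrating the velocity autocorrelation v^2 exp(-|t|/tau),
    where tau is the orientation correlation (rotational diffusion) time."""
    x = t / tau
    return 2.0 * v**2 * tau**2 * (x + np.exp(-x) - 1.0)

v, tau = 3.0, 0.5  # made-up propulsion speed and correlation time

# Short times: ballistic motion, <r^2> ~ v^2 t^2
t_short = 1e-4
print(msd_persistent(t_short, v, tau) / (v**2 * t_short**2))  # ~1

# Long times: diffusive, <r^2> ~ 2 v^2 tau t, i.e. D_eff = v^2 tau / 2 in 2D
t_long = 1e4
print(msd_persistent(t_long, v, tau) / (2 * v**2 * tau * t_long))  # ~1
```

The crossover between the two regimes happens around t ≈ tau, which is the "step time" of the effective random walk described in the abstract.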
General phenomena Others Recent paper by Ramin Golestanian (26th Feb 2016): http://pubs.acs.org/doi/pdf/10.1021/acs.nanolett.5b04372 on power spectrum for electric-field-driven ion transport through nanopores. Apparently Pink noise (noise that has a power-law power spectrum, instead of a flat one, as for white noise) is commonplace in situations with electric fields, and the underlying mechanism is not totally understood. https://en.wikipedia.org/wiki/Point_process Stochastic processes with JS: https://www.npmjs.com/package/stochastic A string, in Computer science, Information theory, and Mathematics, is a Sequence of symbols, where each symbol is a member of a given set, called the alphabet. Strings often refer to finite sequences. See here. These constructions are useful in Mathematics and Computer science. Strings in computer science In computer science, strings are one of the fundamental Data types used in Programming. In this case, the symbols are called characters. However, a string can also be considered as a Data structure. Training data consisting of inputs and outputs.
Other names for inputs: predictors, independent variables, features. Other names for outputs: responses, dependent variables. In supervised learning, we want to find a function relating inputs to outputs, to then be able to predict new outputs from new inputs. Need a way to represent the function approximation, with some parameters (the model). Some examples of models: and a learning algorithm to find the best parameters for the data, so that the model can predict well. See Learning theory. New paradigm: Deep learning Generative vs discriminative models Learning the function . See notes Output value is continuous, and quantitative (i.e. it has an ordering, and a notion of closeness (metric)). Output value is discrete, or categorical, or qualitative. No implicit ordering, or closeness on the variables. Simple approach: Logistic regression Artificial neural network (see Deep learning) Learning the function , which can be used to find using Bayes' theorem.
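The "model + learning algorithm" pattern above can be made concrete with a minimal logistic-regression classifier fitted by gradient descent (a generic sketch; the toy data, learning rate and iteration count are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: two 2D Gaussian blobs (inputs X, binary outputs y)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Model: p(y=1|x) = sigmoid(w.x + b); learning algorithm: gradient descent
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    grad_w = X.T @ (p - y) / len(y)   # gradient of the average log-loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

preds = (sigmoid(X @ w + b) > 0.5).astype(int)
print("training accuracy:", np.mean(preds == y))
```

Here the parametric form of sigmoid(w.x + b) is the "representation with some parameters", and gradient descent on the log-loss is the "learning algorithm to find the best parameters".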
See notes Variance. How much the model varies with fluctuations of the training data, i.e. how stable it is. Bias. How many assumptions the model imposes, i.e. how inflexible it is. Well that's maybe only one way to look at it.. Test the model on data you haven't used for training. min-max, average https://www.cs.cmu.edu/~schneide/tut5/node42.html Wikipedia has good explanations: https://en.wikipedia.org/wiki/Cross-validation_(statistics) One can show (maybe technical details I don't know..) that given the real distribution of the data, and a sample used for training, one is likely to underestimate the error. So I think cross-validation can be shown rigorously to be good for assessing a model's predictive power (i.e. probability of predicting correctly). See Elements of Statistical Learning book for all details.. It is a way to find out if you are overfitting. Related: https://en.wikipedia.org/wiki/Testing_hypotheses_suggested_by_the_data A method for discriminative Supervised learning, that is for Supervised classification, and Regression analysis. Surface science is the study of physical and chemical phenomena that occur at the interface of two phases, including solid–liquid interfaces, solid–gas interfaces, solid–vacuum interfaces, and liquid–gas interfaces. It includes the fields of surface chemistry and surface physics. See Materials science, Condensed matter physics, Chemistry https://en.wikipedia.org/wiki/Surface_science See Colloid Transport by Interfacial Forces Fluid/fluid interfaces Governed mostly by (apparent) discontinuities in stress, particularly surface tension. These are known as "Marangoni effects", or "capillary-driven flow". Solid/fluid interfaces Governed mostly by slip velocity at the interface. These are responsible for several of the Phoretic mechanisms of colloids, which cause them to move along gradients of some quantity.
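The "test on data you haven't used for training" idea above is what k-fold cross-validation systematizes. A generic sketch (the model here is a deliberately trivial 1-nearest-neighbour placeholder on made-up data, just to exercise the splitting logic):

```python
import numpy as np

def k_fold_cv(X, y, fit, score, k=5, seed=0):
    """Split the data into k folds; train on k-1 folds, test on the
    held-out fold, and average the test scores as an estimate of
    predictive power on unseen data."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        scores.append(score(model, X[test], y[test]))
    return np.mean(scores)

# Toy example: 1-nearest-neighbour on two well-separated 1D clusters
X = np.concatenate([np.random.default_rng(1).normal(0, 0.1, 30),
                    np.random.default_rng(2).normal(5, 0.1, 30)])[:, None]
y = np.array([0] * 30 + [1] * 30)
fit = lambda Xt, yt: (Xt, yt)  # "training" just stores the data
score = lambda m, Xs, ys: np.mean(
    m[1][np.abs(m[0] - Xs.T).argmin(axis=0)] == ys)
print(k_fold_cv(X, y, fit, score))  # well-separated clusters: 1.0
```

Because each point is scored only by a model that never saw it during training, the averaged score avoids the optimistic bias of training error mentioned above.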
An effect, where effectively large neutral spaces are also favoured, but in equilibrium, not out of equilibrium as in the Arrival of the frequent See comments on Arrival of the frequent, for more comparisons. Original paper: Evolution of digital organisms at high mutation rates leads to survival of the flattest
A suspension is a dispersion of solid particles in a liquid (IUPAC definition). For the particles to be definable as solid, they must have at least some size, and thus a suspension requires particles of colloidal size, or larger. Some authors use suspension to refer to those suspensions where the particles are large enough to sediment. The case for smaller particles (like colloidal particles) may then be called Sol (colloid). A symbolic dynamical system often results from the partitioning of the state space of a general Dynamical system. https://en.wikipedia.org/wiki/Shift_space http://www.math.harvard.edu/library/sternberg/slides/symbolic.pdf : http://www.scholarpedia.org/article/Symbolic_dynamics See Fractal, Complex systems The symbolic method of Analytic combinatorics, applied to unlabelled structures. It uses the ordinary generating function. Elementary identity: , where is the number of objects of size The number of rooted ordered trees of nodes is the th Catalan number. Can derive GF by using the fact that "a tree is a node and a sequence of trees". See here. Can easily extend to binary trees, as done in video Trees have been related to other combinatorial structures: gambler's ruin sequences, context-free languages, triangulations, ... The symmetric property, or just symmetry, in Set theory, is a property of a binary Relation on a Set : Discrete symmetry breaking Continuous symmetry breaking. Goldstone theorem –I think: A matter of time-scales?? (Lecture Notes in Artificial Intelligence volume 5777) Kampis, Karsai, Szathmáry-Advances in Artificial Life_ Darwin Meets von Neumann, Part 1 (2011) Membrane properties. Protein pores: transport polymers across membranes. Often they have to unfold and fold. Stochastic sensing. Single-molecule chemistry Protein engineering, chemical synthesis, biophysical methods alpha-Hemolysin protein pore. How can a water-soluble protein assemble into a transmembrane pore? 3D droplet networks are tissue-like materials.
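The "a tree is a node and a sequence of trees" fact quoted above translates directly into a convolution recurrence for counting rooted ordered trees, which reproduces the Catalan numbers. A quick sketch checking this against the closed form (my own illustration of the symbolic method, not from the linked notes):

```python
from math import comb

def trees(n_max):
    """t[n] = number of rooted ordered trees with n nodes.
    'A tree is a node and a sequence of trees' gives the GF equation
    T(z) = z / (1 - T(z)); below we use the equivalent recurrence."""
    t = [0] * (n_max + 1)
    t[1] = 1  # the single-node tree
    for n in range(2, n_max + 1):
        # root uses one node; the remaining n-1 nodes form a SEQ of subtrees
        t[n] = count_sequences(t, n - 1)
    return t

def count_sequences(t, m):
    """Number of sequences of trees with m nodes in total.
    SEQ(T) has GF 1/(1-T), i.e. s[i] = sum_k t[k] * s[i-k], s[0] = 1."""
    s = [1] + [0] * m
    for i in range(1, m + 1):
        s[i] = sum(t[k] * s[i - k] for k in range(1, i + 1))
    return s[m]

t = trees(8)
catalan = [comb(2 * n, n) // (n + 1) for n in range(8)]
print(t[1:9])   # trees with 1..8 nodes
print(catalan)  # Catalan numbers C_0..C_7 -- the same sequence, shifted
```

So the number of rooted ordered trees with n nodes is the (n−1)th Catalan number, consistent with the generating-function derivation mentioned above.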
aqueous droplet networks. Synthetic biology Woolfson Bromley 2011.
Nice diagram Synthia ...completely synthetic cells. Protein components for nanodevices, Bayley et al. Water droplets in oil form a monolayer, but two of these will tend to come together and form a bilayer. There is a force that attracts them. Probably kinetically stable. Lipid-coated hydrogels as components... The 7R "diode". Folding droplet networks using osmolarity, with different salt concentrations in each. Soft robots. Light for sensing, power generation and patterning.. Bacteriorhodopsin: light-driven proton pump. https://autodeskresearch.com/groups/bionano http://www.nanalyze.com/2016/03/3-companies-building-nanorobot-factories/
Foundations of Computational and Systems Biology Stochastic Dynamics for Systems Biology book The mycobiome The largely overlooked resident fungal community plays a critical role in human health and disease. Systems approaches to modelling pathways and networks
It has become commonly accepted that systems approaches to biology are of outstanding importance to gain understanding from the vast amount of data which is presently being generated by advancing high-throughput technologies. See Non-equilibrium statistical physics and Complex systems Stochastic approaches in systems biology. See Systems biology Differential Equation Models for Systems Biology: A Survey Winter School on Quantitative Systems Biology 2015 ICTP-ICTS Winter School on Quantitative Systems Biology Information processing in biological systems Statistical mechanics for real biological networks by William Bialek: Turing Lecture (Part 2) Reading and writing omes - George M Church Harvard Molecular Technologies Systems biology uses many tools from Mathematical biology http://www.sysbiodtc.ox.ac.uk/ Application for admission as a graduate student to the University of Oxford Academic conditions Achieve the EPSRC minimum of a 2:1 classification in your current programme of study and
provide a hard-copy original or certified copy of your final transcript. Once you have met the condition above, please inform us as soon as possible by sending the relevant official documentation to the address above. We need to receive this information by 31 August 2016. As I will have the Degree ceremony (MMathPhys) in September, the proof I need to send is a Degree confirmation letter (see here). A place for the EPSRC Systems Biology Doctoral Training Centre beginning 3 October 2016. Completion of Conditions letter: If you satisfy all the conditions set by both the department and the college in their offer letters, you will be sent a final letter by your department confirming your place. I expect during summer. See MMathPhys oral presentation, and GKeep for topic ideas. For DPhil period. Potential supervisor is Ard Louis. This offer includes full funding in the form of a prestigious EPSRC Studentship which covers
both University and College fees, and also includes a stipend award to cover living expenses (£14,057 per annum at current rates). This funding covers the entire four year duration of the programme. More details about how I will receive the scholarship? When, how. See email: You do not need to worry about the studentship - we pay the students on behalf of the EPSRC. You will receive an email from the Finance Officer before you arrive asking for bank details, but the first payment will be given to you as a cheque on your first day here. Kellogg College Essential Information for Offer Holders 2016-17: you will need to be in College in time to attend induction events for graduate students, which are expected to begin on 3 October 2016. A wider range of welcome events will run from 23 September 2016. Your departmental induction programme may start on a slightly different date, so you will need to arrange to be in Oxford in time for whichever begins first. Non-equilibrium statistical physics Nanotechnology and Artificial intelligence, as well as ways of combining them (see section in nanotech tiddler) Interdisciplinary sciences Mostly about systems, synthesizing, going beyond the reductionism and analysis of the basic foundational sciences. List of systems science journals Cybernetics Principia Cybernetica Electronic Library http://www.emeraldinsight.com/loi/k Philosophy http://www.vub.ac.be/CLEA/dissemination/groups-archive/vzw_worldviews/ Taxis refers to a behavioural response by an organism to a directional stimulus or gradient of stimulus intensity. See Phoretic mechanisms of self-propelled colloids for similar mechanisms in simpler active colloid systems. Taxonomy: Life's Filing System - Crash Course Biology #19 Homologous traits Binomial nomenclature groups of organisms Domain Kingdom Phylum Class Order Family Genus Species Technologies are pieces of Art with very clear purpose, and thus must use the more rigorous methods of Science. The purpose of technologies is often to extend what we can do. Engineering is the art and science of making new technology. http://www.deepknowledgeventures.com/ Other innovation areas Food innovation: see Food
A textile, or cloth, is a kind of flexible Composite material consisting of a network of natural or artificial fibres (yarn or thread). http://www.jstor.org/stable/25221013?seq=1#page_scan_tab_contents The treatise of Walter de Milimete See El mundo físico by Guillemin, Phonurgia nova by Athanasius Kircher, Secreta secretorum, etc. https://web.stanford.edu/group/kircher/cgi-bin/site/?attachment_id=679 "Know thou, moreover, that the people aforetime have produced things which the contemporary men of knowledge have been unable to produce. We recall unto thee Murtús* who was one of the learned. He invented an apparatus which transmitted sound over a distance of sixty miles." http://bahaiasheboro.blogspot.co.uk/2010/05/know-thou-moreover-that-people.html
See MMathPhys oral presentation The structure of the genotype–phenotype map strongly constrains the evolution of non-coding RNA Non-coding RNA (ncRNA) is RNA whose function is not to encode a protein. Its function may then be structural, or catalytic for instance, and is most often determined by its secondary structure, which is then the phenotype of interest. The distribution of properties found in ncRNA in nature (from the fRNAdb database) closely follows that obtained by G-sampling (uniform sampling over genotypes). Due to the bias in the GP map, this sampling is very different from P-sampling (uniform sampling over phenotypes). The strong bias makes certain structures appear much more often, which has been called convergent evolution in Evolution (part of the general phenomenon of homoplasy). An example is the ubiquity of the hammerhead ribozyme throughout all the kingdoms of life. Figure 2.
Comparison of P-sampled and G-sampled distributions to natural data for L = 20 RNA. The P-sampled PP(Ω) (red diamonds) measures the probability distribution for a phenotype to have a given NS size Ω. It differs markedly from G-sampled PG(Ω) (blue circles), generated by random sampling over genotypes. Error bars arise from binning data. The black and cyan lines are theoretical approximations to PP(Ω) and PG(Ω), respectively (see Methods). The probability distribution of Ω for the SSs of all 7327 (non-trivial) L = 20 sequences for Drosophila melanogaster from the fRNAdb database [21] (green squares) is much closer to the G-sampled PG(Ω) than to the P-sampled PP(Ω). Inset: all 11 218 SS phenotypes (purple triangles) ranked by NS size Ω. There is strong bias: just 5% of phenotypes take up 58% of all genotypes. The 7327 natural data points (green squares) are clustered at lower rank (larger Ω). (Online version in colour.) The number of 'relevant structures' can be estimated by the entropy of the G-sampled distribution of features (for instance belonging to a certain binned interval of neutral space size, or number of stacks (sets of contiguous base-pairs)), as . One can define the bias ratio as the ratio of to the total number of phenotypes. Within these relevant structures which arrive during evolution, natural selection still acts, and can be seen for example in the higher stability of natural RNAs vs random G-sampled RNAs. We find that the natural RNAs have slightly more bonds than in G-sampled structures. The bias towards larger Ω also leads to structures with larger mutational robustness (see Robustness and Evolvability in Living Systems and From sequences to shapes and back: a case study in RNA secondary structures). Larger
robustness is considered to be advantageous [6], so that, in this important way, phenotype bias facilitates evolution.
The high robustness, however, is found in both G- and P-sampling because of the high genetic correlations (genes tend to be close in the mutational network to other genes that produce the same phenotype). The genetic correlations are high enough to produce giant connected components (see Natural Selection and the Concept of a Protein Space). "Bias means that it will be difficult for evolution to find L = 55 structures with a large number of stacks, again raising the question of what kind of functionality is possible in principle that cannot be reached by evolution because of such phenotype bias constraints?" Understanding tip: The line in figure 4 is flat when there are a lot of phenotypes because there are a lot of phenotypes with the same , and the phenotypes are equally spaced along the axis in the rank plot. The result that G-sampling produces the same results as the database indicates that some property similar to ergodicity may be at play. G-sampling is an ensemble average, and the database shows a kind of time-average over evolutionary trajectories. However, the process cannot be totally ergodic because evolution is a nonequilibrium process, and effects like long waiting times and the Arrival of the frequent are examples of non-ergodic non-equilibrium effects. The GP map bias is an example of how biases in development or other internal processes could strongly affect evolutionary outcomes. These have been controversial; however, RNA SS provides perhaps the clearest and most unambiguous evidence for the importance of bias in shaping evolutionary outcomes. See Homoplasy for discussion on the relation to convergent and parallel evolution. Our ability to make detailed predictions about evolutionary outcomes as well as counterfactuals for RNA may also shed light on Mayr's famous distinction between proximate and ultimate causes in biology (see Cause and effect in biology and Proximate and ultimate causation). Not sure about this, or if I understand it.. The GP mapping constraint has some resemblance to classical morphogenetic constraints which also bias the arrival of variation [47]. But it also differs, because the latter are conceptualized at the level of phenotypes and developmental processes, and may have been shaped by prior selection, whereas the former constraint is a fundamental property of the mapping from genotypes to phenotypes and was not selected for (except perhaps at the origin of life itself. Still, maybe most possible GP maps have this property anyway (see experiments with transducers)). Finally, strong phenotype bias is also found in: suggesting that some of the results discussed in this paper for RNA may hold more widely in biology. See also Evolving automata. Paper with several examples of GP maps, including a cellular automata map: An investigation of redundant genotype-phenotype mappings and their role in evolutionary search For this see: Exploring the repertoire of RNA secondary motifs using graph theory; implications for RNA design. Tree graphs to describe RNA tree motifs and more general (dual) graphs to describe both RNA tree and pseudoknot motifs. Our graph theory approach to RNA structures has implications for RNA genomics, structure analysis and design. Experimental fitness landscapes to understand the molecular evolution of RNA-based life
In evolutionary biology, the relationship between genotype and Darwinian fitness is known as a fitness landscape. These landscapes underlie natural selection, so understanding them would greatly improve quantitative prediction of evolutionary outcomes, guiding the development of synthetic living systems. However, the structure of fitness landscapes is essentially unknown. Our ability to experimentally probe these landscapes is physically limited by the number of different sequences that can be identified. This number has increased dramatically in the last several years, leading to qualitatively new investigations. Several approaches to illuminate fitness landscapes are possible, ranging from tight focus on a single peak to random speckling or even comprehensive coverage of an entire landscape. We discuss recent experimental studies of fitness landscapes, with a special focus on functional RNA, an important system for both synthetic cells and the origin of life. Methods Computer science is what came out of asking: what kind of maths can actually be effectively carried out in the physical world? Theoretical computer science, looks at the more theoretical (as opposed to applied) aspects of this question. The nature of computation by Moore and Mertens (looks like a nice book). Good reads page and Amazon page Structure and interpretation of computer programs Companion site Functional programming http://learnyouahaskell.com/introduction Higher-order functions. Composition. Examples in JS: .filter, map, reduce https://www.youtube.com/watch?v=2jz0ugqghys http://research.cs.queensu.ca/home/akl/cisc879/papers/PAPERS_FROM_MINDS_AND_MACHINES/VOLUME_13_NO_1/V23L84X656370574.pdf This gets quite philosophical of course https://www.youtube.com/watch?v=92WHN-pAFCs Computation is the part of maths that can effectively be carried out in the world Computation is often studied via mechanistic models like those formalized in Automata theory. 
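The automata-theoretic models of computation mentioned above can be made concrete with the simplest one, a deterministic finite automaton. A minimal sketch (my own example; the recognized language, binary strings with an even number of 1s, is a standard textbook regular language):

```python
def make_dfa(states, transition, start, accepting):
    """Return a recognizer for the DFA (Q, delta, q0, F):
    run the transition function over the input and accept iff the
    final state is in the accepting set."""
    def accepts(string):
        state = start
        for symbol in string:
            state = transition[(state, symbol)]
        return state in accepting
    return accepts

# DFA over {0,1} accepting strings with an even number of 1s
even_ones = make_dfa(
    states={"even", "odd"},
    transition={("even", "0"): "even", ("even", "1"): "odd",
                ("odd", "0"): "odd", ("odd", "1"): "even"},
    start="even",
    accepting={"even"},
)

print(even_ones("1001"))  # True  (two 1s)
print(even_ones("111"))   # False (three 1s)
```

Languages like this sit at the bottom of the hierarchy (regular languages, recognized by finite state machines); context-free languages and Turing machines then strictly extend what can be recognized.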
The main models will be explained below, in the Models of Computation section. A formal language is a set of strings of symbols that may be constrained by rules that are specific to it. These rules can also be expressed as machines, like finite state machines or Turing machines. Finite state machines < Context-free languages < Turing machines < Undecidable problems (hypercomputation) See Chomsky hierarchy in Formal systems and semantics https://www.youtube.com/watch?v=ZNBNmxXKmUY&index=7&list=PL601FC994BDD963E4. On Lect 3 part 2/10 Computability of functions See also Automata theory for more. Theory of Computation - Fall 2011 (Course) Theory of Automata, Formal Languages and Computation lect 1 Introduction to computability theory See Clusters, asters, and collective oscillations in chemotactic colloids for more details. See also Phoretic mechanisms of self-propelled colloids, Collective behaviour of active colloids, Diffusiophoresis, and Designing phoretic micro- and nano-swimmers. Use normal flux boundary conditions for the Diffusion of the concentration of product () and substrate (), as done in Concentration around a self-diffusiophoretic particle. Michaelis-Menten reaction rate (see Enzyme kinetics). Number conservation for the products and substrates, and the assumption that s and p diffuse rapidly compared to the colloid so that time dependencies and advection by flow [41] can be ignored give: where is the background substrate profile. We thus need to solve for just one of the two concentration fields. This equation comes from the condition that, after reaching the stationary state (assumed fast, by molecules diffusing fast), the flux of products out should equal the net flux of substrate in, i.e. (where is the concentration, see here and here). Now integrate w.r.t. over the boundary layer (assumed to be very thin, of size , the radius of the colloid) to get . Now the concentration of outside the boundary layer is assumed to be very small, while that of is fixed to .
We thus recover the above equation. Because the boundary layer is very thin and the concentrations change approximately linearly within it, the above equation can be interpreted as simply a "discretization" of the equation with derivatives, which actually holds just at the surface. Note that the solution of the diffusion equation at stationarity in 1D is linear, which helps justify this under the thin boundary approximation. We work first in the linear regime which refers to the limit . Here, is the Michaelis constant, and this regime corresponds to the case where the rate of catalysis is linearly proportional to the substrate concentration (see Enzyme kinetics). This regime is also called unsaturated. Later we look at the saturated regime. See Collective behaviour of active colloids The resulting slip velocity (see Diffusiophoresis) of the fluid at the surface of the colloid (due to the interaction of the surface with both substrate and products) leads, for spherical colloids, to angular () and linear () velocities: Again, see Diffusiophoresis; these are derived from the reciprocal theorem. These can be expressed in terms of coefficients related to the spherical harmonic coefficients (we only include the first few) of the surface activity , and motilities and (see Diffusiophoresis): The coefficients , etc. take into account the external substrate gradient directly, as well as the effects that the external substrate gradient has on the gradient of products produced by the particle. Essentially, the different Phoretic mechanisms of self-propelled colloids correspond to responses in either or to the external gradient, through different spherical harmonic components. ... if either or contain all odd or all even harmonics there is no reorientation in response to the gradient (). From calculations we find explicit examples of the general design tip: slip velocity is maximum when the position where is maximum coincides with the region where changes most rapidly.
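The unsaturated (linear) versus saturated regimes of the Michaelis–Menten rate discussed above can be checked numerically. A quick sketch (the values of v_max and K_M are made-up placeholders):

```python
# Michaelis-Menten rate: R(s) = v_max * s / (K_M + s)
v_max, K_M = 2.0, 1.0  # made-up maximum rate and Michaelis constant

def rate(s):
    return v_max * s / (K_M + s)

# Unsaturated regime (s << K_M): rate is linear in s, R ~ (v_max / K_M) * s
s_small = 1e-3
print(rate(s_small) / ((v_max / K_M) * s_small))  # ~1

# Saturated regime (s >> K_M): rate approaches the constant v_max
s_large = 1e3
print(rate(s_large) / v_max)  # ~1
```

This is why, in the linear regime, the catalytic flux at the colloid surface can be treated as proportional to the local substrate concentration, while in the saturated regime it becomes a constant activity independent of the substrate field.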
To see more about design considerations see Designing phoretic micro- and nano-swimmers. https://en.wikipedia.org/wiki/Thermodynamic_equilibrium Thermodynamic equilibrium, no net currents (detailed balance) Linear response theory deals with near-equilibrium systems, where averaged quantities either don't change, or change very slowly, I think. Currents may be non-zero in either case. Kubo formula. Read more here. When two liquids are miscible in all proportions at high temperature, but separate into two distinct phases when the temperature is lowered. The Mean field theory for this situation is the regular solution model. This describes the thermodynamics (i.e. equilibrium properties) of the phase separation. The kinetics (i.e. non-equilibrium properties/dynamics) of phase separation are described here. The important quantity is the volume fraction, φ, proportional to the probability to find a particle of type A or B at a given point, which may in principle depend on space. To begin with, we assume it doesn't depend on space, and we assume that the probabilities for neighbours are independent (mean field approximation). A way to think about this more precisely is imagining each and every one of the configurations of unlabelled particles (with finite volume) in a fluid. Now, assume all of these are equally probable, with probability . Now, for each of these spatial configurations of unlabelled particles, imagine all the possible ways of labelling the particles with A or B. In particular, we assume that for each of these configurations, the labelling of each of the particles is an independent random event, and for each particle there is probability φ of labelling it A, and probability 1-φ of labelling it B. This doesn't fix the total numbers of A and B, but for large numbers it approximately does so, with errors of . Within this approximation we also have (where is the average number of species ), so that we may call φ a concentration.
We could do it fixing the number of particles of each species, but it's more cumbersome, and not really correct for the case where the concentrations vary in space (because when φ varies in space, we don't assume the numbers are fixed, but only the chemical potentials, and thus the average numbers). If one fixes the number of the species, though, one can approach it as it's done in the derivation of the Flory-Huggins theory in Doi's polymer physics book (to see some notes on an extension to the continuous Gaussian chain, instead of the lattice model). More importantly, these probabilities are not right because nearby particles are going to interact in our model, so there will be correlations in positions induced by the Boltzmann factors depending on the energies. This is where we make the mean field approximation. We ignore these correlations and assume the probability distributions at each site are independent! By decomposing the possible states in this way we have for the entropy ( is set of unlabelled arrangements): where we used the properties of Binomial distributions and that , as there are no other types of particles. Ignoring constants, the entropy per particle is: s/k_B = -[φ ln φ + (1-φ) ln(1-φ)]. We can write the energy per particle too. We define energies for AA, BB, and AB pairs. We assume, following our mean field approximation, that there are a number of A neighbours equal to the expected number of neighbours given by the above scheme, i.e. zφ, and similarly for B, where z is the expected number of neighbours, not caring about label. After some algebra this gives a free energy per particle: f(φ)/k_BT = φ ln φ + (1-φ) ln(1-φ) + χ φ(1-φ), where χ depends on the strength of the interaction energies relative to k_BT. This curve has one minimum for high T and two minima for low T, i.e. χ > 2 (where we consider f as a function of φ, say). When there's one minimum, the system will in general not reach it because the overall composition φ is fixed, and it can be seen geometrically (see soft matter Jones book) that when the curve has positive curvature, then any phase separation will be unfavorable.
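The change from one to two minima in the regular-solution free energy can be checked numerically. A sketch, assuming the standard form f(φ) = φ ln φ + (1−φ) ln(1−φ) + χ φ(1−φ) in units of kT (the χ values used are arbitrary illustrations):

```python
import numpy as np

def f(phi, chi):
    """Regular-solution free energy per site, in units of kT."""
    return (phi * np.log(phi) + (1 - phi) * np.log(1 - phi)
            + chi * phi * (1 - phi))

phi = np.linspace(1e-4, 1 - 1e-4, 100001)

def n_minima(chi):
    y = f(phi, chi)
    # count interior local minima of the sampled curve
    return int(np.sum((y[1:-1] <= y[:-2]) & (y[1:-1] < y[2:])))

print(n_minima(1.0))  # high T (small chi): one minimum, at phi = 1/2
print(n_minima(3.0))  # low T (large chi): two symmetric minima

# Curvature f'' = 1/phi + 1/(1-phi) - 2*chi controls local stability;
# f''(1/2) = 4 - 2*chi vanishes at the critical point chi = 2.
print(4 - 2 * 2.0)  # 0.0
```

The sign change of f'' is exactly the "positive curvature means phase separation is unfavorable" criterion stated above, and the locus f'' = 0 is the spinodal.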
However, when the two minima appear, it is favorable. Phase separation refers to a system where there are different spatial regions in the volume of the system with different values for the order parameter, in this case related to φ. The curve corresponding to the most favourable concentrations that will coexist in the different regions for the phase separated mixture is called the coexistence curve, or the binodal. These most favourable concentrations are the ones that when a line is drawn through their corresponding values of F in the curve, the intersection with the vertical line at the initial concentration is lowest. See Fig 1.a. This (if there are no degeneracies) can be found by the double-tangent construction: by finding a straight line that is tangent to the curve at two points. This condition is derived as follows: Analyzing the free energy curve and realizing that the separation process is continuous (not a sudden jump), one realizes that depending on where the initial concentration begins, the homogeneous state is locally unstable, or locally stable but globally unstable (i.e. metastable). This depends on the curvature of the curve as seen in figure 2. As usual the metastable state will have a time-scale for overcoming the barrier (exponentially dependent on the barrier height, cf. Kramers rate theory). The curve that separates these two regimes, i.e. where the curvature vanishes, is called the spinodal. A good point to remember is that χ, in the simplest case, depends on temperature as χ ∝ 1/T, but often the energies of interaction we used in it have entropic contributions, so the temperature dependence is more complicated. Topography is the study of the shape and features of the surface of the Earth and other observable astronomical objects including planets, moons, and asteroids. A Topological dynamical system consists of a Topological space (e.g., a Metric space) , and a continuous map . In dynamical systems, complexity is usually measured by
the topological entropy and reflects roughly speaking, the proliferation of periodic
orbits with ever longer periods or the number of orbits that can be distinguished
with increasing precision. See the related Kolmogorov-Sinai entropy. Hans Henrik RUGH - The Milnor-Thurston determinant and the Ruelle transfer operator. For a coarse-grained Dynamical system described by a transition graph, in turn described by an Adjacency matrix , the topological entropy is , where is the maximum eigenvalue of (assumed to be a positive matrix, so that Perron-Frobenius applies).

A topological space is a Set , with a collection of distinguished Subsets called Open sets, called the topology of the set. These must satisfy: An equivalent definition is that a topological space is a Neighbourhood space in which, for all and for all , there exists such that, for all . It can also be shown that: a neighbourhood space is a topological space if and only if each Filter has a Filter base consisting of Open sets. Remark: for a family of subsets of a set , there exists a unique 'smallest' topology on for which is a subbase: namely, that topology whose open sets are defined to be all arbitrary unions of the collection of all finite intersections of elements of . The set of open sets in a topology forms a lattice, where the partial ordering is set inclusion. The set of topologies on a set can also be equipped with a natural lattice structure. In a topological space one can define fundamental notions of: These are approached using neighbourhoods of a point, which are just open sets that contain that point. The family of neighbourhoods

The topological trace formula is a Trace formula for Topological dynamics. See here and here. Also here. Here, if the prime cycle exists ( being its length), and otherwise. http://www.chaosbook.org/course1/Course2w9.html This formula has uses for deriving a formula for the Topological entropy.

Topos is a category that behaves like the category of sheaves of sets on a topological space (or more generally: on a site).
Topoi behave much like the category of sets and possess a notion of localization; they are in a sense a generalization of point-set topology.[1] The Grothendieck topoi find applications in Algebraic geometry; the more general elementary topoi are used in logic. A topos is a category with: A) finite limits and colimits, B) exponentials, C) a subobject classifier. Higher topos theory.

A total ordering is a binary Relation in a set , defined as a Partial ordering, , such that for any either or . The set is then said to be totally ordered.

A trace formula relates the spectrum of eigenvalues of an operator - for instance, the transition matrix - to the spectrum of periodic orbits of a dynamical system. See here, and Topological trace formula.

Transitivity (Graph theory) (a property of mathematical relations) in a network is usually applied to the relation "is connected by an edge". So a network is transitive if, whenever u is connected to v and v is connected to w, then u is also connected to w. It's not hard to show that a perfectly transitive network can only have components that are fully connected, i.e. cliques. To be useful for real networks, we talk about partial transitivity, or the level of transitivity in a network. A way to quantify this is by measuring the number of paths of length 2 that are closed (closed here meaning that there is an edge connecting the beginning and ending vertices) compared to the total number of length-2 paths. This is because the three vertices in a path of length 2 (a.k.a. a connected triple) would form a triangle (also known as a closed triad) if transitivity held for them. One can then define the clustering coefficient, , to be the ratio of these two quantities, as a measure of "how often" transitivity holds in the network: where the 6 and the 3 come from counting the number of length-2 paths starting at the three different vertices of the triangle, where we count the two different directions (6) or not (3). 
This factor is cancelled by the fact that, by definition, there are twice as many length-2 paths as connected triples, because connected triples don't take direction into account, while length-2 paths do. This last definition is the most common, and can be interpreted as the fraction of people with a common friend (connected triple) that are also friends themselves (so that they form a triangle). Another way to define a clustering coefficient is to average the local clustering coefficient over all nodes. This quantity is defined, for node , as: which is defined when the degree . For smaller degree, we can define . The average of this over the nodes in the network, , then also defines a global measure of transitivity, and was proposed by Watts and Strogatz. It often tends to be dominated by vertices with low degree, as the denominator of is small for them. Furthermore, one can extend the definition of the clustering coefficient beyond simple transitivity, to include the probability that friends of friends of friends are also your friends, and so on. This is equivalent to considering quadrilaterals, pentagons, and other more general motifs apart from triangles. Triangles are often interesting because they are the smallest loops for undirected simple graphs. However, for directed simple graphs, the smallest loops have length 2, and their frequency gives a measure called reciprocity. For social networks, typical values are , which is quite high compared to most non-social networks. Local clustering coefficients can be used to find structural holes. That is, places in the network where we would expect a link to exist, due to transitivity, but there isn't one. Structural holes are bad for information flow (or other flows) in a network because they limit the paths it can take. However, they are usually good for the node that has a low local clustering coefficient, because it means that that node has more control over the flow, as most of its neighbours will have to direct their flow through it. 
Thus the local clustering coefficient is sometimes used as a centrality measure in this sense, where a more central node has a lower . Another way to find structural holes is via the redundancy of a node, , defined as the mean number (averaged over the neighbours of i) of other neighbours of i that each neighbour of i is connected to. This can be shown to be related to by: .
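The triangles-over-triples definition of the clustering coefficient above can be sketched in pure Python; the small graph at the bottom is a made-up illustrative example, not one from the text:

```python
from itertools import combinations

def clustering_coefficient(adj):
    """Global clustering coefficient C = 3*(#triangles) / (#connected triples).
    adj: dict mapping each node to a set of its neighbours (undirected graph)."""
    triangles = 0  # counts each triangle 3 times, once per centre vertex
    triples = 0
    for v in adj:
        k = len(adj[v])
        triples += k * (k - 1) // 2            # connected triples centred on v
        for u, w in combinations(adj[v], 2):
            if w in adj[u]:                     # the u-w edge closes the triple
                triangles += 1
    return triangles / triples if triples else 0.0

# Example: a triangle {a, b, c} plus a pendant node d attached to c.
adj = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b', 'd'}, 'd': {'c'}}
C = clustering_coefficient(adj)   # 3 closed triples out of 5 -> 0.6
```

Since each triangle is counted once per centre vertex, the factor of 3 in the formula is already absorbed into the triangle count.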
Self-driving cars, google, tesla Smart vehicles - IoT http://www.techinsider.io/images-of-the-hyperloop-technologys-test-track-2016-3 Tesla Motors https://en.wikipedia.org/wiki/Electric_aircraft http://www.tandfonline.com/doi/abs/10.1080/00207540500142274 Drones https://www.sciencedaily.com/releases/2013/04/130403122013.htm On the performance of electrohydrodynamic propulsion
citing papers Electrohydrodynamic thrust density using positive corona-induced ionic winds for in-atmosphere propulsion
We conclude that EHD propulsion has the potential to be viable from both an energy efficiency perspective (our previous study) and a thrust density perspective (this paper), with the greatest likelihood of viability for smaller aircraft such as unmanned aerial vehicles. On the Thrust of a Single Electrode Electrohydrodynamic Thruster :O Performance characterization of electrohydrodynamic propulsion devices. No one talks about the power storage and supply problem? “The voltages could get enormous,” Barrett says. “But I think that’s a challenge that’s probably solvable.” For example, he says power might be supplied by lightweight solar panels or fuel cells. Barrett says ionic thrusters might also prove useful in quieter cooling systems for laptops. http://www.scielo.br/scielo.php?pid=S1806-11172015000300307&script=sci_arttext A Review of Future Propulsion Technologies. Passenger drone: World's first passenger drone cleared for testing in Nevada. Esoteric ideas: Lightcraft.

A tree is a combinatorial structure recursively defined to be {a node and a sequence of trees}. See Symbolic method for unlabelled structures. See also the particular kind: Tree (Graph theory). Graph-Theoretic Concepts in Computer Science: 29th International ..., Volume 29. A tree, in graph theory, is a connected, undirected graph that contains no closed loops. A forest is a graph whose connected components are trees. A tree in graph theory is a particular kind of Tree (combinatorial structure). Trees are often drawn in a "rooted" manner. However, topologically, no node is distinguished as a root, and we could choose any node to be the root in this representation. Properties. Diagrams used in Coding theory https://en.wikipedia.org/wiki/Convolutional_code#Trellis_diagram See also Finite state channel for examples.

An ordered collection of objects. 
Tuples are found, for instance, as elements of a Cartesian product.

In computer science, a turmite is a Turing machine which has an orientation as well as a current state and a "tape" that consists of an infinite two-dimensional grid of cells.

For Simple contagions, a node can get infected by simple exposure to another infected node (possibly with a certain probability or rate). These are mostly compartmental models, and their extensions are used to model mostly biological contagions (like infectious diseases), as well as some IT contagions (like computer viruses). For Complex contagions, nodes get infected by more complex processes, often involving several other nodes. These are often used to model more complicated social contagions and phenomena. See Social dynamics. See also wiki page: Complex contagion.

Types of models used in the study of Percolation, and Percolation theory: Remove nodes (each with a given probability, or a fixed fraction; these are the same in the limit of infinite ). One can also remove edges. Pruning process for obtaining the K-core of a network: one removes all nodes with fewer than K neighbours, and repeats this process. Percolation processes that show a discontinuous, or at least very steep, phase transition. http://research.microsoft.com/en-us/um/people/holroyd/boot/ An "infection" process in which nodes become infected if sufficiently many of their neighbors are infected; related to the Centola-Macy threshold model for social contagions. One construes "connectivity" as implying that a sufficiently short path still exists after some network components have been removed. To appreciate this idea, imagine trying to navigate a city in which some streets are blocked. Percolation of K-cliques (completely connected subgraphs of K nodes) has been used to study the algorithmic detection of dense sets of nodes known as "communities" (see Uncovering the overlapping community structure of complex networks in nature and society pdf). 
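The K-core pruning process described above (remove all nodes with fewer than K neighbours, repeat until none remain) can be sketched in Python; the example graph is made up:

```python
def k_core(adj, k):
    """Iteratively remove nodes with fewer than k neighbours.
    adj: dict mapping node -> set of neighbours (undirected); returns survivors."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    changed = True
    while changed:
        changed = False
        for v in [v for v in adj if len(adj[v]) < k]:
            for u in adj[v]:
                if u in adj:
                    adj[u].discard(v)    # detach v from its surviving neighbours
            del adj[v]
            changed = True               # degrees dropped, so prune again
    return set(adj)

# Hypothetical example: a 4-clique {0,1,2,3} with a tail 3-4-5.
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4}, 4: {3, 5}, 5: {4}}
core3 = k_core(adj, 3)   # the tail is pruned away; the clique survives
```

The repeat-until-stable loop is essential: removing the degree-1 node 5 drops node 4 below the threshold, so it is pruned on the next pass.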
A type of process that is non-self-averaging, in the sense that the relative variance of the size of the largest component doesn't vanish in the thermodynamic limit.

Percolation on a directed Network.

https://twitter.com/adamatzky?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor Physarum polycephalum. Physarum machines and physarum solver. Membrane computing.

In any lattice , a subset of is said to be an upper set if implies that for all satisfying , where refers to the Partial ordering defining the lattice.

aka symbol code. In a variable-length code, one assigns a codeword to each letter in an alphabet. Formally, a variable-length code is a function , where is the source alphabet, is the code alphabet, and is the Kleene star. The extension of is the natural extension of to . The codewords are all the elements of the codomain of . is uniquely decodable if its extension is one-to-one. (IC 2.2) Symbol codes - terminology and notation. An example is Morse code.

The Virgo Supercluster (Virgo SC) or the Local Supercluster (LSC or LS), one of the millions of superclusters, is a mass concentration of galaxies that contains the Virgo Cluster in addition to the Local Group, which in turn contains the Milky Way and Andromeda Galaxy. A 2014 study indicates that the Virgo Supercluster is only a lobe of a greater supercluster, Laniakea, which is centered on the Great Attractor.

A representation of a fraction of physical memory in a computer, which is created by the Operating system for Memory allocation. What is virtual memory, how is it implemented, and why do operating systems use it? 
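The variable-length (symbol) code entry above can be illustrated with a small sketch: a made-up prefix code (not one from the text), its extension to strings, greedy decoding, and a check of the Kraft inequality.

```python
# Hypothetical prefix code over the source alphabet {a, b, c}.
code = {'a': '0', 'b': '10', 'c': '11'}

def extend(msg):
    """Extension of the code: concatenate the codewords symbol by symbol."""
    return ''.join(code[x] for x in msg)

def decode(bits):
    """A prefix code decodes greedily and uniquely, left to right."""
    inverse = {w: x for x, w in code.items()}
    out, buf = [], ''
    for b in bits:
        buf += b
        if buf in inverse:           # a complete codeword has been read
            out.append(inverse[buf])
            buf = ''
    return ''.join(out)

# Kraft inequality: sum of 2^(-len) over codewords is <= 1 for any
# uniquely decodable binary code (here it is exactly 1: a complete code).
kraft = sum(2 ** -len(w) for w in code.values())

msg = 'abcab'
assert decode(extend(msg)) == msg    # the extension is one-to-one
```

Prefix-freeness (no codeword is a prefix of another) is what makes the greedy decoder correct, which is why Morse code needs its inter-letter pauses.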
VR https://www.youtube.com/playlist?list=PLbMVogVj5nJSyt80VRXYC-YrAvQuUb6dh Bringing Virtual Reality to the Web http://andsynchrony.net/projects/loop/ The Untold Story of Magic Leap, the World’s Most Secretive Startup http://www.magicleap.com/ http://www.slideshare.net/alexglee/magic-leap-augmented-reality-strategy-insights-from-patents Axon VR - new immersive VR; also see Omni.

Viscoelasticity is the property of a material that displays both viscosity and elasticity. Such materials are called viscoelastic. Hookean solid: shear strain proportional to shear stress; the proportionality constant is , the shear modulus. Newtonian fluid: rate of shear strain proportional to shear stress; the proportionality constant is , the viscosity. Viscoelastic materials: different responses at different time-scales. Often: elastic response with fixed strain when stress is first applied, but after a relaxation time, , the fluid becomes viscous and the strain then increases linearly. Shear-thinning fluid: viscosity decreases with shear rate. Shear-thickening fluid: viscosity increases with shear rate. The latter three behaviours can often be associated with the fluid being a dispersion of colloidal particles. In reality, all fluids are slightly viscoelastic, but the relaxation times are very small indeed. When you apply a stress to a fluid, its energy instantaneously increases because you are pushing atoms together. This exerts back a force that sustains the stress momentarily. The difference between a fluid and a solid is that the fluid can very quickly rearrange the atoms to a state of lower stress (without needing to break many expensive bonds due to the crystalline order). The key for the fluid to have an instantaneous shear modulus, though, is that the timescale for the opposing force from compressing the atoms together to emerge is still less than the relaxation time, I think. 
A way to estimate this relaxation time for the fluid is by considering the atoms that get trapped in "cages" by neighbouring atoms. Such an atom is in a higher energy (and lower entropy) state, and to relax it needs to overcome the potential barrier due to its neighbouring atoms. Due to the stochastic nature of this process, the relaxation time will follow an Arrhenius behavior with (where is the "frequency" of attempts to escape). Plugging in measured or estimated values, this gives –s, which explains why the fluid appears viscous on the timescales of most experiments. By looking at Fig. 1, we can estimate the viscosity of a fluid to be , which thus depends rather strongly on temperature. This turns out to be the basis for the liquid-to-glass transition. However, as the temperature approaches the glass transition temperature, the temperature dependence of the relaxation time (and thus viscosity) changes. The viscosity is in fact found to appear to diverge at a finite temperature, as described by the Vogel-Fulcher law. As the relaxation time becomes large enough, the system falls out of equilibrium with respect to experimental time scales, and the liquid forms a glass. The transition to a glass is however not a (thermodynamic) phase transition. It depends on the rate at which we lower the temperature, and it is in fact a kinetic transition (see Soft matter Jones book section 2.4). The situation here is sometimes called broken ergodicity (I think: isn't this similar to what happens in phase transitions with spontaneously broken symmetries?). While there is no full theory of glass formation yet, a few have been proposed. An early approach is the free volume theory, but its assumptions are questionable and its predictions sometimes don't agree with experiment. More modern theories use the idea of cooperativity: as the temperature is lowered, the free volume is lowered too, and the molecules get more "cramped" together. 
Then, for a molecule to move, its neighbours must move in a certain cooperative fashion. See work by Adam and Gibbs.

Elasticity in solids. Apart from the shear modulus described above for Hookean solids, there are also: A simple calculation (see Soft matter Jones book page 13) shows that for a Hookean solid (atoms connected by Hookean springs), the Young modulus is , where is the spring constant per spring, and is the equilibrium interatomic separation. By considering a real potential expanded around its minimum (and considering the typical shape of this potential, like a Lennard-Jones potential), we can see that this is of order , where is the energy of the interatomic potential minimum, i.e. the bond energy. This means that a material with a high density of strong bonds is stiff, while a material with a low density of weak bonds is floppy (soft). It is important to note that real solids are in fact observed to exhibit a kind of viscosity. If the stress is applied long enough, a solid with impurities, dislocations, etc. can creep when these dislocations move around (as moving them only involves breaking a few bonds, this is much more likely than straining a perfect crystal). See Principles of CMP book; also remember how stable the square lattices of bucky balls were?
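The Arrhenius estimate of the cage-escape relaxation time described above, and the Vogel-Fulcher form that replaces it near the glass transition, can be sketched numerically. All parameter values below are illustrative assumptions, not values from the text:

```python
import math

kB = 8.617e-5       # Boltzmann constant in eV/K
tau0 = 1e-13        # s, inverse "attempt frequency" (assumed, ~atomic vibration)
eps = 0.3           # eV, barrier to escape the cage (assumed)

def tau_arrhenius(T):
    """Relaxation time tau = tau0 * exp(eps / kB T)."""
    return tau0 * math.exp(eps / (kB * T))

def tau_vogel_fulcher(T, B=0.1, T0=150.0):
    """Vogel-Fulcher form: apparent divergence at a finite T0 (parameters illustrative)."""
    return tau0 * math.exp(B / (kB * (T - T0)))

# Modest cooling changes the Arrhenius relaxation time by an order of magnitude:
ratio = tau_arrhenius(250.0) / tau_arrhenius(300.0)   # > 10 for these parameters
```

With these numbers the strong temperature dependence is visible directly, and the Vogel-Fulcher time blows up as T approaches T0, which is the "falling out of equilibrium" picture of the glass transition.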
HTC VIVE, STAR VR, Oculus Rift, Sony VR, Samsung Gear... The Void, Magic Leap, etc.

Warp drives imply time travel: it is a geometrical matter of starting and ending points, independent of how one travels. Still, we don't know if they are actually possible, though they are rather unphysical in many respects.

http://people.idsia.ch/~juergen/schickard.html https://en.wikipedia.org/wiki/Wilhelm_Schickard (1592 - 1635) "Computer history starts in 1623, when Wilhelm Schickard built mankind's first automatic calculator.
Schickard's machine could perform basic arithmetic operations on integer inputs. His letters to Kepler, discoverer of the laws of planetary motion, explain the application of his "calculating clock" to the computation of astronomical tables. The non-programmable Schickard machine was based on the traditional decimal system. Leibniz subsequently discovered the more convenient binary system (1679), an essential ingredient of the world's first working program-controlled computer, due to Zuse (1941)."

For linear differential equations of any order, with non-constant coefficients (in general). See here and here. As shown in the example in the notes, multiple scales fails when the frequency of the fast oscillation depends on the slow scale. Then, one has to instead use the WKB ansatz: in the dispersive case, or in the dissipative case. When substituting this into an equation given in a certain form (a form in which all second-order ODEs can be expressed, see first lectures by Bender), one gets a series of equations for the terms of increasing order in .

Turning points: use Matched asymptotic expansions. At the turning point itself, the leading-order solution is an Airy function.

There are many variants. Assumptions: The definition can be found here: Definition (Haploid Wright-Fisher model with selection): In a panmictic, haploid population of constant size , where individuals are of type and : if generation at time consists of individuals of type , and of type , then, according to the Wright-Fisher model with selection, the generation at time is formed by individuals, each of which has a probability to be of type given by: and is of type otherwise. The process is called sampling with replacement, because we are, in effect, replacing each individual of the previous population by a new one, which follows a given distribution of alleles (types). is called the selection coefficient, and is the fitness of type . 
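The Wright-Fisher sampling step defined above can be sketched as a simulation. Since the formula is blanked in the note, the sketch assumes the standard form p = (1+s)i / ((1+s)i + (N-i)) for a type with fitness 1+s against a type with fitness 1:

```python
import random

def wright_fisher_step(i, N, s, rng):
    """One generation of the haploid Wright-Fisher model with selection.
    i copies of the fitter type (fitness 1+s) among N individuals; each offspring
    is of that type independently with probability p (sampling with replacement)."""
    p = (1 + s) * i / ((1 + s) * i + (N - i))
    return sum(1 for _ in range(N) if rng.random() < p)   # binomial draw

def run_to_fixation(i, N, s, seed=0):
    """Iterate until one type takes over: returns 0 (loss) or N (fixation)."""
    rng = random.Random(seed)
    while 0 < i < N:
        i = wright_fisher_step(i, N, s, rng)
    return i

# With s = 0 the step reduces to p = i/N, i.e. pure genetic drift.
outcome = run_to_fixation(i=50, N=100, s=0.5, seed=1)
```

The absorbing states at 0 and N are what the "Fixation" entry below refers to; with s = 0 the model describes drift only, as the note says.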
If we give a fitness to type , then we use , and one can see how this would be generalized for more possible types in the model. The way this probability comes about is: . If for all types, selection doesn't play a role, and the model describes genetic drift only. Also described here. Starting from the same setup as above (for the Haploid Wright-Fisher model with selection), the definition for the model with mutation is: Definition (Haploid Wright-Fisher model with selection and mutation): If there are individuals of type among parents (and individuals of type ), and we have mutation rates for , and for , then the probability of type (also called the proportion of potential offspring, in frequentist language, used often in biology) is: As above, each of the individuals in the next generation (offspring) has a type independently following this distribution. The number of type offspring follows a binomial distribution.

Fixation: see page 326 in here for instance. See this question.

Branch of biology that studies animals. See Tree of life. Study of animal behaviour: Ethology.

Central ansatz and simplicity bias:
Examples of simplicity bias in maps
Selection rules
Single swimmer hydrodynamics: background
Far-field flows
Chlamydomonas flow source
Single microswimmer hydrodynamics: applications
Collective hydrodynamics of active entities
Collective hydrodynamics of active entities: applications
Other applications
Self organization in active matter



Fig 1.
A network is acyclic if and only if it has a nilpotent adjacency matrix.
Algorithm to compute LZ complexity measure
def KC_LZ(string):
    n = len(string)
    s = '0' + string
    c = 1
    l = 1
    i = 0
    k = 1
    k_max = 1
    stop = 0
    while stop == 0:
        if s[i + k] != s[l + k]:
            if k > k_max:
                # k_max stores the length of the longest pattern in the look-ahead
                # (LA) that has been matched somewhere in the search buffer (SB)
                k_max = k
            # increase i while the bit doesn't match, looking for a previous
            # occurrence of a pattern; s[i+k] scans the search buffer (SB)
            i = i + 1
            if i == l:
                # we stop looking when i catches up with the first bit of the LA.
                # If we were actually compressing, we would add the new token here;
                # here we just count reconstruction STEPs
                c = c + 1
                # move the beginning of the LA to the end of the newly matched pattern
                l = l + k_max
                if l + 1 > n:
                    # if the LA surpasses the length of the string, we stop
                    stop = 1
                else:
                    # after a STEP, reset the searching index to the beginning of
                    # the SB (beginning of string)
                    i = 0
                    # reset the pattern-matching index. Note that we are actually
                    # matching against the first bit of the string, because we
                    # added an extra 0 above, so i+k is the first bit of the string.
                    k = 1
                    # and reset the max length of the matched pattern to k
                    k_max = 1
            else:
                # we've finished matching a pattern in the SB; reset the
                # matched-pattern length counter
                k = 1
        else:
            # increase k as long as the pattern matches, i.e. as long as the s[l+k]
            # bit can be reconstructed from s[i+k]. Note that the matched pattern
            # can "run over" l because the pattern starts copying itself (see the
            # LZ 76 paper). This is just what happens when you apply the cloning
            # tool in Photoshop to a region where you've already cloned...
            k = k + 1
            if l + k > n:
                # if we reach the end of the string while matching, we need to add
                # that to the tokens, and stop
                c = c + 1
                stop = 1
    # a la Lempel and Ziv (IEEE Trans. Inf. Theory IT-22, 75 (1976)):
    # h(n) = c(n)/b(n), where c(n) is the complexity estimate and h(n) is a
    # normalised measure of complexity.
    # b = n*1.0/np.log2(n)
    # complexity = c/b
    complexity = c
    return complexity
Kolmogorov complexity
Bounds
Relations to entropy
Algorithmic randomness and incompressible sequences
Universal probability
Imagine a monkey sitting at a keyboard and typing the keys at random.
The halting problem
Noncomputability of Kolmogorov complexity
Chaitin's
Universal gambling
Universal prediction
Occam's razor
Coding theorem
Algorithm visualizer

Symbolic method
Symbolic method for unlabelled structures (Ordinary generating function)
Symbolic method for labelled structures (Exponential generating function)
Tissues
Organs
Organ systems
Theoretical framework
Eq.3 Polymorphic limit
Eq. 4 Monomorphic limit

Simulations in model GP maps
Random GP map:
RNA secondary structure mapping
The arrival of the frequent



Summary/Discussion
Mathematical modelling of neural networks
Optimization
[image above, wait until it loads, you also need to be signed into google]
Types of neural networks
Mathematical modelling of neural networks
Statistical mechanics of neural networks
A network partition with exhibits "assortative mixing". A network partition with exhibits "disassortative mixing".
Order notation
Uniqueness of asymptotic series
Numerical use of divergent series
Parametric expansions
Integration by parts (IBP)
Laplace-type integrals
Laplace method
Method of stationary phase
Method of steepest descents
Splitting range of integration
Bounding integrals



Magic lantern
(Mandelung's rule)








Animating maths
Interacting with maths
This is just what I meant when I said AugMath aims for Virtual Reality as a platform. And it is awesome: https://vimeo.com/150928998 
Check Ket algebra editor
Finite automaton
Input affects dynamics
Input affects initial state
Output
Infinite automaton
Networks of automata
Cellular automata
Graph dynamical system
Automatic complexity
Automaticity
Finite state dimension
Finite state complexity
NFA based complexity
State complexity

Simplicity bias
Effects of bias in GP maps
Arrival of the frequent
Common features of GP maps
Origin of bias in GP maps
Encyclopedia of Life
Open Tree of Life
Description levels in biology
Levels 1,2: Ecology
Levels 2,3: Biodiversity & evolution
Tree of life
Levels 4,5: Organism biology
Levels 6,7: Cell biology
Levels 8: Molecular biology
General methods
Quantitative biology
Mathematical biology
Normalized BDM

Implementation
BN simulator on the web! http://rumo.biologie.hu-berlin.de/boolesim/
Dynamics of Boolean networks
Statistical mechanics of Boolean networks



Discrete space: random walk

Continuous space

Classification
Physics
Mechanics
Thermodynamics
Modern physics: statistical physics, quantum mechanics




Passive transport
Active transport
Ion channels
Endocytosis and exocytosis
Statistical mechanics of cellular automata
Computation theory of cellular automata
Elementary cellular automaton (wiki)


Examples










Cellular respiration stages
Glycolysis
Krebs cycle
Electron transport chain
Characteristics of chaos
Routes to chaos
Period doubling
Intermittency
Blue sky catastrophe

Cronin group: very nice research in inorganic biology, evolution, synthesis, and applications.

Supervised classification
Ordered categorical classification
Types of codes
Codes for transmission/storage reliability
Codes for transmission/storage efficiency
Recent review: Emergent behavior in active colloids
–Phoretic mechanisms of self-propelled colloids–
–Collective behaviour–
Effective interactions of active colloids
Stochastic equations of motion

Concentration fields
Colloid number density and orientation density (averaged equations)
Collective Behavior of Thermally Active Colloids
Emergent Cometlike Swarming of Optically Driven Thermally Active Colloids

is the drift velocity of the thermophoretic attraction due to the far-field temperature gradient created by one particle causing a thermophoretic response in the other (here we assume the Soret coefficient is negative, so that they attract, i.e. a particle climbs up gradients). is the self-thermophoretic drift velocity due to the particle interacting with the temperature gradient on its surface created by its own non-uniform illumination.
Colloid physics
Microhydrodynamics of colloids
Phoretic mechanisms of colloids
Properties of communication systems

Dynamic theory of nematic Liquid crystals
Suspension dynamics
Interesting idea about emergence and complex systems: Sloppy systems Complex Systems: A Survey
Methods and Techniques of Complex Systems Science: An Overview
Complexity measures
Actually here I am referring to "complexity" as used in Complex systems theory. As Wiki says, Complexity theory can also refer to Computational complexity or Descriptional complexity (a fundamental concept in Algorithmic information theory). 
In-components are all the vertices from which one can reach a certain vertex, including the vertex itself.
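This definition can be sketched as a breadth-first search that follows edges backwards; the example graph is made up:

```python
from collections import deque

def in_component(edges, t):
    """edges: iterable of directed (u, v) pairs.
    Returns the set of vertices from which t is reachable, including t itself."""
    preds = {}
    for u, v in edges:
        preds.setdefault(v, set()).add(u)   # predecessor lists = reversed graph
    seen = {t}                              # the vertex itself is in its in-component
    queue = deque([t])
    while queue:
        v = queue.popleft()
        for u in preds.get(v, ()):
            if u not in seen:
                seen.add(u)
                queue.append(u)
    return seen

# Hypothetical graph: a -> b -> c -> e, and d -> c.
edges = [('a', 'b'), ('b', 'c'), ('d', 'c'), ('c', 'e')]
comp = in_component(edges, 'c')   # {'a', 'b', 'c', 'd'}; e only lies downstream
```

Running the same search on the un-reversed graph would instead give the out-component.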

Halting problem
https://moleculamaxima.com/
Kolmogorov complexity
Computer algebraic geometry
Computer linear algebra
Other computer algebras
Ket algebra editor
GPUs


Condensed vs non-condensed
Solid vs fluid
Hard vs soft
Condensed forms of matter
Non-condensed forms of matter

Constitutive equations in non-equilibrium
Control theory
Control systems
Switched systems
Convnet demo on the web! details here


Sentence ConvNets
Critical phenomena
Scaling hypotheses
Upper critical dimension
Real-space renormalization group
Inner Universe [extension of #cyberself]
Places & places - Mapping science
A nice categorization: Wiki Category:Fundamental categories
Data compression theory
Data compression codes
Lossless coding
Lossy coding
Data transmission system
Data transmission theory
Data transmission system engineering
Data types
A Neural Algorithm of Artistic Style
Deep dream
New advances in deep learning
Features of deep learning
Deep learning methods
Neural networks for spatially structured data
Multi-instance learning
Neural networks with memory
Deep learning theory



A network where all nodes have the same degree is called 'regular'.
Directed networks
Kolmogorov complexity
Complexity measures based on data compression
Automata-based descriptional complexity
Entropy-based complexity measures
Permutation complexity
Network complexity
See Self-diffusiophoresis, and Diffusiophoresis for theory Designs of self-diffusiophoretic particles
Spherical

Thin rod

Coordinate transformation


Diffusion equation
Applications
Smoluchowski capture rate
Phoretic mechanisms of colloids



Deep art


Class structure
Disordered system models
Self-averaging
Dispersion types
| Continuous medium | Dispersed: Gas | Dispersed: Liquid | Dispersed: Solid |
|---|---|---|---|
| Gas | None (because all gases are mutually miscible) | Colloidal: Liquid aerosol | Colloidal: Solid aerosol. Coarse: Dust |
| Liquid | If dispersed phase has enough concentration: Foam | Colloidal: Emulsion | Colloidal: Suspension |
| Solid | Porous solid filled with gas. If dispersed phase has enough concentration: solid Foam | Porous solid filled with liquid, like Gels | Colloidal: Solid sol, like Cranberry glass. Coarse: conglomerates |

Structure of DNA

Processes of DNA
Chemistry of DNA
Information in DNA
Programmable motion of DNA origami mechanisms
Mechanical design of DNA nanostructures
Methods and techniques
Applications and engineering

Nonlinear systems
Networks
Cannabis
Benzodiazepines
Cocaine
Drug related anxiety
Drug related infections
Drug related mood
Drug related personality
etc.
Free (unforced) Duffing oscillator
Free undamped Duffing oscillator
Free damped Duffing oscillator
Forced Duffing oscillator
Nonlinear resonances
Onset of chaos
Activities and Sensitivities in Boolean Network Models
Random Boolean networks: Analogy with percolation (Stauffer)
Types of dynamical systems
Deterministic vs probabilistic dynamics
Encyclopedia:Dynamical systems
Deterministic processes on networks
Random processes on networks
Figure 2.
"Remanence" behaviour
Memory effects
Theory of non-equilibrium behaviour of spin glasses

Diagram of a product cycle (showing the main phases in the life of a product).Raw material extraction
Production of goods
Distribution of goods
Consumptions of goods
Disposal and recycling of goods

Arrival of the frequent

Robustness and evolvability
Examples of GP map bias

Electrostatics
Magnetostatics
Electromagnetism
See this course: MAE6240 Fall 2012
Fuel-based energy production
Renewable energy production
Hydro-electric plant

Wind power

Solar power




Entropy rate
Topological entropy

Metric entropy

Types of contagions
Types of contagions
Ergodic hypothesis
Equilibrium ensembles
Fundamental postulate
Partition function
Thermodynamics
Laws
1st law
2nd law
3rd law
Thermodynamic potentials
Applications
Forward error correction
Quantum error correction
Evolutionary biology
Modern evolutionary synthesis
Evolution theory
Neutral theory of evolution
Evolutionary developmental biology
Features of evolving systems
Effects in evolution
Genetic information and evolution
Evolutionary computing
Evolution in Complex systems
Bias in GP maps
Genetic programming
Evolvable hardware
Artificial life
Entropy of a Finite State Transducer

Transition table of a five-state FST (states 0-4, two transitions per state):
0 | 2 | 1 | 1
0 | 1 | 0 | 1
1 | 4 | 1 | 0
1 | 3 | 0 | 0
2 | 1 | 1 | 1
2 | 0 | 0 | 0
3 | 2 | 1 | 0
3 | 1 | 0 | 0
4 | 1 | 1 | 0
4 | 1 | 0 | 0



Achlioptas processes
k-vertex rule percolation process
Half-restricted processes

Spanning cluster-avoiding process
Applications of explosive percolation models


Examples of finite state channels
Trellis diagrams
Definitions and basic properties of FSCs

Entropy rate of a finite state process
Deterministic finite automaton
Non-deterministic finite automaton
See also Finite-state transducer



Convection-diffusion
Applications of Fokker-Planck equation
Mathematical properties of FP eq
Food innovation




Iterated function system

JavaScript
CSS libraries
Graphics and visualization web libraries
Audio libraries
Input libraries
CMS
Types of function
Functional programming in Javascript
map() function, so that one can map these collections to other collections of the same size. One can make this analogy more precise via functors in Category theory, which is closely related to functional programming ideas.
Libraries
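The functor analogy can be sketched concretely (in Python for illustration, though the note is about JavaScript's map()): mapping preserves the size of the collection, and mapping a composition equals composing two maps (the functor composition law).

```python
# Sketch of the functor analogy for map(): mapping a function over a
# collection returns a collection of the same size, and mapping a
# composed function equals composing two separate maps.
def compose(f, g):
    return lambda x: f(g(x))

xs = [1, 2, 3]
double = lambda x: 2 * x
inc = lambda x: x + 1

mapped_once = list(map(compose(double, inc), xs))   # map(f . g)
mapped_twice = list(map(double, map(inc, xs)))      # map(f) . map(g)
# Both give [4, 6, 8], and the length (the structure) is preserved.
```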
Redux is a nice functional programming-like framework for React. Learn redux.
Mathematical study of games
Game theory
Combinatorial game theory
GP map bias
Models

Mendelian genetics
Population genetics
Genetic engineering


Force networks

https://goocreate.com/


Precambrian
Hadean
Archean
Proterozoic
Phanerozoic
Paleozoic
Cambrian
Ordovician
Silurian
Devonian
Carboniferous
Permian
Mesozoic
Triassic
Jurassic
Cretaceous
Cenozoic
Paleogene
Neogene
Quaternary



Algorithm

Humans
Human society
Independence of two random variables
Mutual independence
Pairwise independence
Conditional independence
Applications

Types of information source
Entropy/Information
Coding theory
Data transmission
Data compression
Cryptography
Network information theory
Algorithmic information theory
More related areas
Microscopic interfacial forces
Mesoscopic interfacial forces
Theory of interfacial forces
JS libraries
Functional programming on JavaScript
JS animation libraries
Meteor (JS)
Math JS libraries
Other
Spinodal decomposition

Paucity theorems
Solving the Langevin equation
Non-inertial regime
Watson lemma
Laplace method
General Laplace integral
A poset in which every pair of elements possesses a join and a meet.
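A concrete sketch of this definition (assuming the divisibility order on positive integers, a standard example): the meet of a pair is the gcd and the join is the lcm, so every pair has both.

```python
from math import gcd

# In the divisibility lattice on positive integers,
# meet = greatest common divisor, join = least common multiple.
def meet(a, b):
    return gcd(a, b)

def join(a, b):
    return a * b // gcd(a, b)  # lcm

# e.g. meet(12, 18) = 6 and join(12, 18) = 36
```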



Relations with entropy

Least mean squares
Landau-de Gennes bulk free energy density
Generalized elasticity of liquid crystals
See Complex fluid dynamics for the dynamics of liquid crystals

Examples
Supervised learning
Unsupervised learning
Variations on supervised and unsupervised
Semi-supervised learning
Active learning
Decision-theoretic learning
Reinforcement learning
Learning theory and Learning algorithms
Deep learning
Bayesian inferential statistics
Can artificial intelligence create the next wonder material?

Method of matched asymptotic expansions
Matching of asymptotic expansions
Prandtl matching rule
van Dyke matching rule
Intermediate variable matching
Composite expansion
Boundary and transition layers
Common structures
Types of edges
Undirected: edges have no orientation. Directed: edges have an orientation. Weighted: edges can have any real value associated. Unweighted: edges can only have 0 or 1 (a.k.a. binary).
Representations
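A minimal sketch (illustrative names) of the two standard representations, adjacency matrix and adjacency list, for a small undirected unweighted graph:

```python
# Undirected, unweighted graph on nodes 0..3 with edges (0,1), (1,2), (2,3).
edges = [(0, 1), (1, 2), (2, 3)]
n = 4

# Adjacency matrix: entry A[i][j] is 1 iff i and j are joined by an edge.
A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = A[j][i] = 1  # symmetric because the graph is undirected

# Adjacency list: each node maps to the set of its neighbours.
adj = {v: set() for v in range(n)}
for i, j in edges:
    adj[i].add(j)
    adj[j].add(i)
```

For sparse networks the adjacency list uses far less memory; the matrix makes edge lookup O(1).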
Common Types
Other Mathematical aspects
Types of measures
Centrality measures
Custom stylesheet.
iframe. However, I then need to substitute document with window.parent.document to access stuff in our document. Maybe I can define a JavaScript macro like the one below that takes JavaScript code as input, does the right things, and adds the iframe dressing, etc.!
Example of custom Macro
exports.name = "testMacro";
It then runs the exports.run function.
TODO
root path for offline file:// links!
Topics in metaphysics
Aristotle's metaphysics
Continental rationalists and metaphysics
Method of steepest descents
Interfacial forces
Year | Assessment Code | Assessment | Assessment Type | Mark | Grade
2015/16 | A12169 | Nonlinear Systems (Combined) | Overall Mark | 77 | -
2015/16 | A12206 | Perturbation Methods | Written | 77 | -
2015/16 | A13117 | Networks | Submission | 73 | -
2015/16 | A15088 | Quantum Field Theory | Written | 100 | -
2015/16 | A15089 | Kinetic Theory | Written | 75 | -
2015/16 | A15091 | Scientific Computing I | Submission | 70 | -
2015/16 | A15275 | Soft Matter Physics | Practical | - | Pass
2015/16 | A15280 | Scientific Computing II | Submission | 100 | -
2015/16 | A15282 | Nonequilibrium Statistical Physics | Written | 80 | -
2015/16 | A15416 | Topics in Soft and Active Matter Physics | Practical | - | Pass
2015/16 | A15417 | Complex Systems | Submission | 76 | -
2015/16 | A15430 | Oral Presentation | Oral | - | Pass
Hilary Term
Trinity term
Exams timetable
Networks
On Spatial networks
Complex systems
On Percolation
Nonlinear systems
See the slides for a nice and ordered presentation of the ideas.
Evolution
Bias in GP maps
Simplicity bias
Effects of bias in GP maps
Arrival of the frequent
Common features of GP maps
Origin of bias in GP maps
Genotype-phenotype map (GP map)

Survival of the flattest
Applications to Deep learning and ANNs? Chico's application to networks. His slides
Relation b/w bias for simplicity in GP maps, and regularization in Machine learning.
Preferential attachment
de Solla Price's model (dSP model)
Barabási–Albert (BA) model
Other properties of preferential attachment models
Extensions of preferential attachment models
Vertex copying models
Network optimization



Statistical properties of FSTs
Origin of bias ideas
Complexity
To look at
Max-margin learning, transfer and memory networks.

Transistor soundtrack
Nano-drugs delivery systems
Nano-sensors and diagnostics
Nano-devices for medical surgery
Nanotechnology for regenerative medicine
Autodesk Bio/Nano Research
AI and nanotechnology
Nanoengineering
Atomically precise manufacturing
See here: http://physics.stackexchange.com/questions/265752/can-osmosis-go-the-other-way/265879#265879
MECHANISM OF OSMOTIC FLOW IN POROUS MEMBRANES
Algorithmic complexity of a graph
Complexity and edge density

Complexity vs symmetry of the graph

Information content in a graph
Symmetry of graphs
Empirical study of Networks
Fundamentals of network theory
Mathematics of networks
Measures and metrics for networks
Large-scale structure of networks
Computer algorithms
Basic concepts of algorithms
Fundamental network algorithms
Matrix algorithms and graph partitioning
Network models
Random graphs
Random graphs with general degree distributions
Models of network formation
Other network models:
Processes on networks
Percolation and network resilience.
Epidemics on networks
Dynamical systems on networks
Network search
Further network measures and analytics
Community structure in networks
Network complexity
Convolutional Networks for Fast, Energy-Efficient Neuromorphic Computing
Hebbian theory
Lecture series by Balakrishnan!
Other aspects and approaches, some of which have led to the recent understanding of systems very (even arbitrarily) far from equilibrium. In approximate chronological order:
Figure. Artistic view of a driven molecular motor-cargo complex with its trajectories. Image by Daniel Schmidt, University of Stuttgart.
Discontinuity in k-vertex rule percolation processes

Phase portrait features and attractors
Equilibrium points
Classifications
Bifurcation theory
Poincare maps
Features of maps
Stability
Bifurcations in 1D maps
2D maps
Examples
Henon map
Standard (or Chirikov) map
See more examples of chaotic maps in Chaos theory
Relaxation oscillations and transition layers
Synchronization and coupled oscillators
Nonlinear continuous dynamical system
Nonlinear oscillations
Nonlinear maps (aka Nonlinear Discrete dynamical system)
Chaos theory
The formula given in Solomonoff's slide is not a probability but the expected number of times a certain program will come up; that is why it is not normalized. For long codes, though, it is approximately a probability.
Note this is calculated with zlib complexity. See code here.
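A minimal sketch of such a compression-based complexity estimate (illustrative; the linked code may differ, and zlib_complexity is an assumed name): the zlib-compressed length is an upper bound on the descriptional complexity, up to compressor overhead.

```python
import random
import zlib

def zlib_complexity(s: str) -> int:
    """Approximate the descriptional complexity of s by its
    zlib-compressed length in bits (an upper bound up to overhead)."""
    return 8 * len(zlib.compress(s.encode("utf-8")))

# A highly regular string compresses far better than a pseudo-random one:
rng = random.Random(0)
simple = "01" * 500
irregular = "".join(rng.choice("01") for _ in range(1000))
# zlib_complexity(simple) is much smaller than zlib_complexity(irregular)
```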
Initial value problems (IVP) for ordinary differential equation (ODE)
ode23: low-order RK
ode45: higher-order RK
ode113: variable-order multistep
ode23s, ode15s, ode15i, ode23t, ode23tb: variants for stiff problems, etc.
N = chebop(a,b)    % define the interval [a,b]
N.op = @(x,u) ...  % define the ODE, with diff(u,k) = kth derivative of u
N.bc = ...         % boundary conditions
Order of accuracy, convergence, stability, etc.
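The fixed-step core of higher-order RK solvers like ode45 can be sketched as follows (pure Python, not MATLAB; adaptive step-size control and error estimation are omitted):

```python
def rk4(f, t0, y0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 with n classical
    fourth-order Runge-Kutta steps; global error is O(h^4)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# y' = -y with y(0) = 1 gives y(1) = e^{-1} ≈ 0.367879
```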
Convergence, consistency, stability
Partial differential equations
Data hiding: one can only access instance values through defined methods. Sometimes built into the language, but even if not, it is often good practice.
Dot notation. next defines how iteration happens over an object that represents a collection. dir(p) shows all methods associated with an object. type(instance) returns the class.
I think the answer to these questions lies in Systems theory, Mathematics, and Science.
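A minimal sketch of these ideas together (illustrative class name; in Python 3 the iteration hook is spelled __next__):

```python
class Pair:
    """Data hiding via an underscored attribute plus a defined accessor,
    and custom iteration via __iter__ / __next__."""
    def __init__(self, a, b):
        self._items = [a, b]  # underscore signals: access via methods only
        self._i = 0

    def get(self, k):
        return self._items[k]

    def __iter__(self):
        self._i = 0
        return self

    def __next__(self):  # defines how iteration happens over the collection
        if self._i >= len(self._items):
            raise StopIteration
        v = self._items[self._i]
        self._i += 1
        return v

p = Pair(1, 2)
# list(p) gives [1, 2]; dir(p) lists its methods; type(p) is the class Pair
```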
Concepts
Managing processes
System calls
File system
–Gradient descent
Newton's method.
Stochastic gradient descent
–Constrained optimization
Linear programming, used in Operations research
Nonlinear programming
–Heuristic optimization
Hyperoptimization
Simulated annealing: http://www.mit.edu/~dbertsim/papers/Optimization/Simulated%20annealing.pdf
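A self-contained sketch of the simulated annealing idea (all names and parameter values are illustrative, not from the linked paper): propose random local moves, always accept improvements, accept uphill moves with probability exp(-delta/T), and cool T geometrically.

```python
import math
import random

def simulated_annealing(f, x0, steps=20000, t0=1.0, seed=0):
    """Minimise a 1D function f by simulated annealing (illustrative)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = t0
    for _ in range(steps):
        y = x + rng.gauss(0, 0.1)        # random local move
        fy = f(y)
        # Accept downhill moves always, uphill with Boltzmann probability.
        if fy < fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= 0.9995                      # geometric cooling schedule
    return best

# Minimising (x - 3)^2 from x0 = 10 should end near x = 3.
```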

Evolving automata
Algorithmic information theory


Geodesic path
Eulerian and Hamiltonian paths
from here
References
Percolation theory
Percolation phase transition
Critical phenomena in percolation
Types of percolation models
Applications of percolation models
Applications to porous materials
Applications to the study of landscapes







Giant component and phase transition
Basic concepts
Percolation on hypercubic lattices
Percolation on Bethe lattices
Percolation on random graphs and networks
Percolation thresholds
Continuity of percolation phase transition
Continuum limit of percolation models
Relations between percolation models and Potts models
Infinite clusters

Mathematical foundation: Asymptotic approximation
Applications
Perturbation methods for algebraic equations
Asymptotic approximation of integrals
Perturbation methods for differential equations
Local analysis
Local analysis of differential equations
Global analysis
Matched asymptotic expansions
Method of multiple scales
WKB method
Perturbation methods for difference equations
Iterative method
Expansion method
Singular perturbations
Non-integral powers
Finding the right expansion sequence
Logarithms
Order parameters and phase fields
Landau theory of phase transitions
Critical exponents and universality


Phoretic mechanisms
Mechanisms

Theory of phoretic mechanisms of self-propelled colloids
Macroscopic/thermodynamic description
Microscopic mechanism
Nice map of physics: http://scimaps.org/maps/map/being_a_map_of_physi_171/detail
Plant biology (botany)
Evolution of plants
Examples of polymers
Polymer architecture

Polymer statics
–Isolated polymer molecule in solution
The ideal chain
Distribution of segments in the polymer chain
Non-ideal chains
–Concentrated solutions and melts
Thermodynamic properties
–Polymer gels
Polymer dynamics
Molecular motion of polymers in dilute solution
Rouse theory
Zimm theory
Molecular motion in entangled polymer systems
Rheology of polymers
Porous solids
Granular material
Fibrous material
Power-law distributions in empirical data
Random variable
Probability distribution function
Moments and cumulants
Generating functions
Central Limit Theorem
Combinatorics


Programming language paradigms
Programming languages
C/C++
Python
JavaScript
Bacteria

Second quantization
Electrons in solids
Quantum liquids
Superconductors, superfluids
Trapped ultra-cold gases



of lattice sites; thus perhaps the nearest-neighbour square lattice is not the most realistic model of these biological aspects.
Enumeration and random generation of accessible automata
Stirling numbers of the second kind
Configuration model
Random graphs with clustering
Statistics of attractors
Probability distribution of size of basin of attraction
Probability distribution of
Probability that a random map of points is indecomposable (i.e. map has a single attractor)
Probability distribution of number of attractors
Probabilities related to a point chosen at random
Random mappings with constraints, and other extensions
Some applications

Components
Component properties (props)
States
References (refs)
Component lifecycle
shouldComponentUpdate can stop the component from re-rendering; the state and props are still updated.
Higher order components

Linear regression
Nearest-neighbour classification
Kernel linear regression
Nonlinear regression
Markov_decision_process: definition

Optimal policy problem
Learning algorithms




Formal sciences and philosophy
Natural science
Systems sciences
Linear algebra
Optimization
Differential Equations
Design strategies for self-assembly of discrete targets
Effective phoretic interactions
Cluster with oscillatory instability
Cluster with run-and-tumble behaviour
D'Alembert's principle in overdamped dynamics
Diffusiophoresis
Designing phoretic micro- and nano-swimmers
Collective behaviour of active colloids
Robotic microswimmers
Phoretic swimmers
Ramin's papers
Topology
Measure on sequence spaces
Fig Katz Sim
SI model
SIR model
SIS model
SIRS model
Simplicity bias in finite state transducers
Simplicity bias in Boolean networks?
Simplicity bias in other discrete systems
Contribution to diffusion
Swimmers in Poiseuille flow
Surfaces
Regularization method
Finding the right scaling
James Sethna: Sloppy models and how science works (video)
A network doesn't need that many shortcuts for the geodesic distance to scale as log N instead of as a power of N, i.e. to be a "small-world".
tail: http://www.computerhope.com/unix/utail.htm
cd /var/log/
tail -c 10000 skeylogger.log
Regular solution model
Breakthrough StarShot

Man-made networks
Physical networks
Biological networks
Geometrical graphs
U = fft(u) (using MATLAB notation for the fast Fourier transform (FFT), an efficient algorithm to compute the DFT). w = ifft(U).
Fourier series
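The transform that fft/ifft compute can be sketched directly (in Python rather than MATLAB; this is the naive O(n²) sum, whereas the FFT computes the same result in O(n log n)):

```python
import cmath

def dft(u):
    """U_j = sum_k u_k * exp(-2*pi*i*j*k/n): the discrete Fourier transform."""
    n = len(u)
    return [sum(u[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(U):
    """Inverse transform, with the 1/n normalisation convention of ifft."""
    n = len(U)
    return [sum(U[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

u = [1.0, 2.0, 3.0, 4.0]
w = idft(dft(u))  # recovers u up to floating-point rounding
```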
Laurent series
Chebyshev series
Spin glass models
Mechanisms underlying spin glass behaviour
Spin glass materials
Static features of spin glasses
Dynamics of spin glasses

Physical basis of spindle self-organization
Theory
Experimental validation
Internal dynamics of spindle
http://www.pnas.org/content/111/52/18496/F1.expansion.html
http://www.pnas.org/content/111/52/18496/F2.expansion.htmlMorphology of the spindle
http://www.pnas.org/content/111/52/18496/F3.expansion.html
Origin of Universality (Why do many field theories look like each other?)
The Landau-Ginzburg Hamiltonian
PRE- More Kaleidoscopes for April 2016
Statistics software
Statistical measures
Watch: Physics - Physical Applications of Stochastic Processes by Prof. V. Balakrishnan
Examples
Classification of models
Descriptions
Important results
Computational methods
Other mathematical aspects
Applications
Discriminative learning
Regression
Classification
General methods
Generative learning
Gaussian discriminant analysis
Naive Bayes
Model assessment
Cross-validation
Surface chemistry
Surface physics
Fluid dynamics at interfaces


Trees and Catalan numbers
Strings
Powersets and Multisets
for all , implies
Supervisors (for 1st year)
EPSRC studentship
Kellogg college
Research interests
Taxa
List_of_emerging_technologies



It would be interesting to devise artificial methods to search for such undiscovered ribozymes (those that are very improbable for evolution to find), some of which may be more fit than those that Nature has found.
Language:
Computability theory
Models of computation
Finite-state machine
Unconventional computing
Fig. 1
Fig. 2

Determinant of a graph
Connections with lattices
Analytical properties
Examples of topologies
Related spaces
Autonomous ground vehicles
Hyperloop

Electric cars
Electric plane

Distribution and supply-chain
Unmanned aerial vehicles
Ionocraft
Types
Applications
Computer Science
Network theory
Physics
Site percolation
Bond percolation
K-core percolation
Explosive percolation
Bootstrap percolation
Limited path percolation
K-clique percolation
Percolation in Multilayer networks
Non-self-averaging percolation process
Correlated percolation
Directed percolation
Other
https://goocreate.com/
The response of matter to a shear stress
Fig 1.

Father of the computer age
Haploid Wright-Fisher model with selection
Haploid Wright-Fisher model with selection and mutation
Diffusion approximation
Vertebrates
Invertebrates
Insects